Jan 31 07:16:58 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 07:16:58 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 07:16:58 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 07:16:58 localhost kernel: BIOS-provided physical RAM map:
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 07:16:58 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 07:16:58 localhost kernel: NX (Execute Disable) protection: active
Jan 31 07:16:58 localhost kernel: APIC: Static calls initialized
Jan 31 07:16:58 localhost kernel: SMBIOS 2.8 present.
Jan 31 07:16:58 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 07:16:58 localhost kernel: Hypervisor detected: KVM
Jan 31 07:16:58 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 07:16:58 localhost kernel: kvm-clock: using sched offset of 13677047647 cycles
Jan 31 07:16:58 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 07:16:58 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 31 07:16:58 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 31 07:16:58 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 31 07:16:58 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 07:16:58 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 07:16:58 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 07:16:58 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 07:16:58 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 07:16:58 localhost kernel: Using GB pages for direct mapping
Jan 31 07:16:58 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 07:16:58 localhost kernel: ACPI: Early table checksum verification disabled
Jan 31 07:16:58 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 07:16:58 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:16:58 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:16:58 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:16:58 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 07:16:58 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:16:58 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:16:58 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 07:16:58 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 07:16:58 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 07:16:58 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 07:16:58 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 07:16:58 localhost kernel: No NUMA configuration found
Jan 31 07:16:58 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 07:16:58 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 31 07:16:58 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 07:16:58 localhost kernel: Zone ranges:
Jan 31 07:16:58 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 07:16:58 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 07:16:58 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 07:16:58 localhost kernel:   Device   empty
Jan 31 07:16:58 localhost kernel: Movable zone start for each node
Jan 31 07:16:58 localhost kernel: Early memory node ranges
Jan 31 07:16:58 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 07:16:58 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 07:16:58 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 07:16:58 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 07:16:58 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 07:16:58 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 07:16:58 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 07:16:58 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 07:16:58 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 07:16:58 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 07:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 07:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 07:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 07:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 07:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 07:16:58 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 07:16:58 localhost kernel: TSC deadline timer available
Jan 31 07:16:58 localhost kernel: CPU topo: Max. logical packages:   8
Jan 31 07:16:58 localhost kernel: CPU topo: Max. logical dies:       8
Jan 31 07:16:58 localhost kernel: CPU topo: Max. dies per package:   1
Jan 31 07:16:58 localhost kernel: CPU topo: Max. threads per core:   1
Jan 31 07:16:58 localhost kernel: CPU topo: Num. cores per package:     1
Jan 31 07:16:58 localhost kernel: CPU topo: Num. threads per package:   1
Jan 31 07:16:58 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 07:16:58 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 07:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 07:16:58 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 07:16:58 localhost kernel: Booting paravirtualized kernel on KVM
Jan 31 07:16:58 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 07:16:58 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 07:16:58 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 07:16:58 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 31 07:16:58 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 31 07:16:58 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 07:16:58 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 07:16:58 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 07:16:58 localhost kernel: random: crng init done
Jan 31 07:16:58 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 07:16:58 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 07:16:58 localhost kernel: Fallback order for Node 0: 0 
Jan 31 07:16:58 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 07:16:58 localhost kernel: Policy zone: Normal
Jan 31 07:16:58 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 07:16:58 localhost kernel: software IO TLB: area num 8.
Jan 31 07:16:58 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 07:16:58 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 07:16:58 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 07:16:58 localhost kernel: Dynamic Preempt: voluntary
Jan 31 07:16:58 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 07:16:58 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 31 07:16:58 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 07:16:58 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 31 07:16:58 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 31 07:16:58 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 31 07:16:58 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 07:16:58 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 07:16:58 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 07:16:58 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 07:16:58 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 07:16:58 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 07:16:58 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 07:16:58 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 07:16:58 localhost kernel: Console: colour VGA+ 80x25
Jan 31 07:16:58 localhost kernel: printk: console [ttyS0] enabled
Jan 31 07:16:58 localhost kernel: ACPI: Core revision 20230331
Jan 31 07:16:58 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 07:16:58 localhost kernel: x2apic enabled
Jan 31 07:16:58 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 07:16:58 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 07:16:58 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 31 07:16:58 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 07:16:58 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 07:16:58 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 07:16:58 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 07:16:58 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 07:16:58 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 07:16:58 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 07:16:58 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 07:16:58 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 07:16:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 07:16:58 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 07:16:58 localhost kernel: active return thunk: retbleed_return_thunk
Jan 31 07:16:58 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 07:16:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 07:16:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 07:16:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 07:16:58 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 07:16:58 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 07:16:58 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 31 07:16:58 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 31 07:16:58 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 07:16:58 localhost kernel: landlock: Up and running.
Jan 31 07:16:58 localhost kernel: Yama: becoming mindful.
Jan 31 07:16:58 localhost kernel: SELinux:  Initializing.
Jan 31 07:16:58 localhost kernel: LSM support for eBPF active
Jan 31 07:16:58 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 07:16:58 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 07:16:58 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 07:16:58 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 07:16:58 localhost kernel: ... version:                0
Jan 31 07:16:58 localhost kernel: ... bit width:              48
Jan 31 07:16:58 localhost kernel: ... generic registers:      6
Jan 31 07:16:58 localhost kernel: ... value mask:             0000ffffffffffff
Jan 31 07:16:58 localhost kernel: ... max period:             00007fffffffffff
Jan 31 07:16:58 localhost kernel: ... fixed-purpose events:   0
Jan 31 07:16:58 localhost kernel: ... event mask:             000000000000003f
Jan 31 07:16:58 localhost kernel: signal: max sigframe size: 1776
Jan 31 07:16:58 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 31 07:16:58 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 31 07:16:58 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 31 07:16:58 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 31 07:16:58 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 07:16:58 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 07:16:58 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 31 07:16:58 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 31 07:16:58 localhost kernel: Memory: 7763936K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618404K reserved, 0K cma-reserved)
Jan 31 07:16:58 localhost kernel: devtmpfs: initialized
Jan 31 07:16:58 localhost kernel: x86/mm: Memory block size: 128MB
Jan 31 07:16:58 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 07:16:58 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 07:16:58 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 07:16:58 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 07:16:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 07:16:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 07:16:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 07:16:58 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 31 07:16:58 localhost kernel: audit: type=2000 audit(1769843818.446:1): state=initialized audit_enabled=0 res=1
Jan 31 07:16:58 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 07:16:58 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 07:16:58 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 07:16:58 localhost kernel: cpuidle: using governor menu
Jan 31 07:16:58 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 07:16:58 localhost kernel: PCI: Using configuration type 1 for base access
Jan 31 07:16:58 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 31 07:16:58 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 07:16:58 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 07:16:58 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 07:16:58 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 07:16:58 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 07:16:58 localhost kernel: Demotion targets for Node 0: null
Jan 31 07:16:58 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 07:16:58 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 31 07:16:58 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 31 07:16:58 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 07:16:58 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 07:16:58 localhost kernel: ACPI: Interpreter enabled
Jan 31 07:16:58 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 07:16:58 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 07:16:58 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 07:16:58 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 07:16:58 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 07:16:58 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 07:16:58 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [3] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [4] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [5] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [6] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [7] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [8] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [9] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [10] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [11] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [12] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [13] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [14] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [15] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [16] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [17] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [18] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [19] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [20] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [21] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [22] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [23] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [24] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [25] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [26] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [27] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [28] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [29] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [30] registered
Jan 31 07:16:58 localhost kernel: acpiphp: Slot [31] registered
Jan 31 07:16:58 localhost kernel: PCI host bridge to bus 0000:00
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 07:16:58 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 07:16:58 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 07:16:58 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 07:16:58 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 07:16:58 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 07:16:58 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 07:16:58 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 07:16:58 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 07:16:58 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 07:16:58 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 07:16:58 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 07:16:58 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 07:16:58 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 07:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 07:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 07:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 07:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 07:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 07:16:58 localhost kernel: iommu: Default domain type: Translated
Jan 31 07:16:58 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 07:16:58 localhost kernel: SCSI subsystem initialized
Jan 31 07:16:58 localhost kernel: ACPI: bus type USB registered
Jan 31 07:16:58 localhost kernel: usbcore: registered new interface driver usbfs
Jan 31 07:16:58 localhost kernel: usbcore: registered new interface driver hub
Jan 31 07:16:58 localhost kernel: usbcore: registered new device driver usb
Jan 31 07:16:58 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 07:16:58 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 07:16:58 localhost kernel: PTP clock support registered
Jan 31 07:16:58 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 31 07:16:58 localhost kernel: NetLabel: Initializing
Jan 31 07:16:58 localhost kernel: NetLabel:  domain hash size = 128
Jan 31 07:16:58 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 07:16:58 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 07:16:58 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 31 07:16:58 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 31 07:16:58 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 31 07:16:58 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 07:16:58 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 07:16:58 localhost kernel: vgaarb: loaded
Jan 31 07:16:58 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 07:16:58 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 07:16:58 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 07:16:58 localhost kernel: pnp: PnP ACPI init
Jan 31 07:16:58 localhost kernel: pnp 00:03: [dma 2]
Jan 31 07:16:58 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 31 07:16:58 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 07:16:58 localhost kernel: NET: Registered PF_INET protocol family
Jan 31 07:16:58 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 07:16:58 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 07:16:58 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 07:16:58 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 07:16:58 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 07:16:58 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 07:16:58 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 07:16:58 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 07:16:58 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 07:16:58 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 07:16:58 localhost kernel: NET: Registered PF_XDP protocol family
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 07:16:58 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 07:16:58 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 07:16:58 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 07:16:58 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 26481 usecs
Jan 31 07:16:58 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 31 07:16:58 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 07:16:58 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 07:16:58 localhost kernel: ACPI: bus type thunderbolt registered
Jan 31 07:16:58 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 31 07:16:58 localhost kernel: Initialise system trusted keyrings
Jan 31 07:16:58 localhost kernel: Key type blacklist registered
Jan 31 07:16:58 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 07:16:58 localhost kernel: zbud: loaded
Jan 31 07:16:58 localhost kernel: integrity: Platform Keyring initialized
Jan 31 07:16:58 localhost kernel: integrity: Machine keyring initialized
Jan 31 07:16:58 localhost kernel: Freeing initrd memory: 88000K
Jan 31 07:16:58 localhost kernel: NET: Registered PF_ALG protocol family
Jan 31 07:16:58 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 31 07:16:58 localhost kernel: Key type asymmetric registered
Jan 31 07:16:58 localhost kernel: Asymmetric key parser 'x509' registered
Jan 31 07:16:58 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 07:16:58 localhost kernel: io scheduler mq-deadline registered
Jan 31 07:16:58 localhost kernel: io scheduler kyber registered
Jan 31 07:16:58 localhost kernel: io scheduler bfq registered
Jan 31 07:16:58 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 07:16:58 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 07:16:58 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 07:16:58 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 31 07:16:58 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 07:16:58 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 07:16:58 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 07:16:58 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 07:16:58 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 07:16:58 localhost kernel: Non-volatile memory driver v1.3
Jan 31 07:16:58 localhost kernel: rdac: device handler registered
Jan 31 07:16:58 localhost kernel: hp_sw: device handler registered
Jan 31 07:16:58 localhost kernel: emc: device handler registered
Jan 31 07:16:58 localhost kernel: alua: device handler registered
Jan 31 07:16:58 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 07:16:58 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 07:16:58 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 07:16:58 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 07:16:58 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 07:16:58 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 07:16:58 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 31 07:16:58 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 07:16:58 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 07:16:58 localhost kernel: hub 1-0:1.0: USB hub found
Jan 31 07:16:58 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 31 07:16:58 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 07:16:58 localhost kernel: usbserial: USB Serial support registered for generic
Jan 31 07:16:58 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 07:16:58 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 07:16:58 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 07:16:58 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 07:16:58 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 07:16:58 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 07:16:58 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 07:16:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 07:16:58 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T07:16:58 UTC (1769843818)
Jan 31 07:16:58 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 07:16:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 07:16:58 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 07:16:58 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 07:16:58 localhost kernel: usbcore: registered new interface driver usbhid
Jan 31 07:16:58 localhost kernel: usbhid: USB HID core driver
Jan 31 07:16:58 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 31 07:16:58 localhost kernel: Initializing XFRM netlink socket
Jan 31 07:16:58 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 31 07:16:58 localhost kernel: Segment Routing with IPv6
Jan 31 07:16:58 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 31 07:16:58 localhost kernel: mpls_gso: MPLS GSO support
Jan 31 07:16:58 localhost kernel: IPI shorthand broadcast: enabled
Jan 31 07:16:58 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 07:16:58 localhost kernel: AES CTR mode by8 optimization enabled
Jan 31 07:16:58 localhost kernel: sched_clock: Marking stable (977004939, 155323361)->(1214555847, -82227547)
Jan 31 07:16:58 localhost kernel: registered taskstats version 1
Jan 31 07:16:58 localhost kernel: Loading compiled-in X.509 certificates
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 07:16:58 localhost kernel: Demotion targets for Node 0: null
Jan 31 07:16:58 localhost kernel: page_owner is disabled
Jan 31 07:16:58 localhost kernel: Key type .fscrypt registered
Jan 31 07:16:58 localhost kernel: Key type fscrypt-provisioning registered
Jan 31 07:16:58 localhost kernel: Key type big_key registered
Jan 31 07:16:58 localhost kernel: Key type encrypted registered
Jan 31 07:16:58 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 07:16:58 localhost kernel: Loading compiled-in module X.509 certificates
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 07:16:58 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 31 07:16:58 localhost kernel: ima: No architecture policies found
Jan 31 07:16:58 localhost kernel: evm: Initialising EVM extended attributes:
Jan 31 07:16:58 localhost kernel: evm: security.selinux
Jan 31 07:16:58 localhost kernel: evm: security.SMACK64 (disabled)
Jan 31 07:16:58 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 07:16:58 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 07:16:58 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 07:16:58 localhost kernel: evm: security.apparmor (disabled)
Jan 31 07:16:58 localhost kernel: evm: security.ima
Jan 31 07:16:58 localhost kernel: evm: security.capability
Jan 31 07:16:58 localhost kernel: evm: HMAC attrs: 0x1
Jan 31 07:16:58 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 07:16:58 localhost kernel: Running certificate verification RSA selftest
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 07:16:58 localhost kernel: Running certificate verification ECDSA selftest
Jan 31 07:16:58 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 07:16:58 localhost kernel: clk: Disabling unused clocks
Jan 31 07:16:58 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 31 07:16:58 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 07:16:58 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 31 07:16:58 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 07:16:58 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 07:16:58 localhost kernel: Run /init as init process
Jan 31 07:16:58 localhost kernel:   with arguments:
Jan 31 07:16:58 localhost kernel:     /init
Jan 31 07:16:58 localhost kernel:   with environment:
Jan 31 07:16:58 localhost kernel:     HOME=/
Jan 31 07:16:58 localhost kernel:     TERM=linux
Jan 31 07:16:58 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 31 07:16:58 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 07:16:58 localhost systemd[1]: Detected virtualization kvm.
Jan 31 07:16:58 localhost systemd[1]: Detected architecture x86-64.
Jan 31 07:16:58 localhost systemd[1]: Running in initrd.
Jan 31 07:16:58 localhost systemd[1]: No hostname configured, using default hostname.
Jan 31 07:16:58 localhost systemd[1]: Hostname set to <localhost>.
Jan 31 07:16:58 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 31 07:16:58 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 31 07:16:58 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 07:16:58 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 07:16:58 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 07:16:58 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 07:16:58 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 31 07:16:58 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 07:16:58 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 07:16:58 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 31 07:16:58 localhost systemd[1]: Reached target Local File Systems.
Jan 31 07:16:58 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 07:16:58 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 07:16:58 localhost systemd[1]: Reached target Path Units.
Jan 31 07:16:58 localhost systemd[1]: Reached target Slice Units.
Jan 31 07:16:58 localhost systemd[1]: Reached target Swaps.
Jan 31 07:16:58 localhost systemd[1]: Reached target Timer Units.
Jan 31 07:16:58 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 07:16:58 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 31 07:16:58 localhost systemd[1]: Listening on Journal Socket.
Jan 31 07:16:58 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 07:16:58 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 07:16:58 localhost systemd[1]: Reached target Socket Units.
Jan 31 07:16:58 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 07:16:58 localhost systemd[1]: Starting Journal Service...
Jan 31 07:16:58 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 07:16:58 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 07:16:58 localhost systemd[1]: Starting Create System Users...
Jan 31 07:16:58 localhost systemd[1]: Starting Setup Virtual Console...
Jan 31 07:16:58 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 07:16:58 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 07:16:58 localhost systemd[1]: Finished Create System Users.
Jan 31 07:16:58 localhost systemd-journald[303]: Journal started
Jan 31 07:16:58 localhost systemd-journald[303]: Runtime Journal (/run/log/journal/9a8690d798044b35b7c96b26f70c3d7e) is 8.0M, max 153.6M, 145.6M free.
Jan 31 07:16:58 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Jan 31 07:16:58 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Jan 31 07:16:58 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 07:16:58 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 07:16:59 localhost systemd[1]: Started Journal Service.
Jan 31 07:16:59 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 07:16:59 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 07:16:59 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 07:16:59 localhost systemd[1]: Finished Setup Virtual Console.
Jan 31 07:16:59 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 07:16:59 localhost systemd[1]: Starting dracut cmdline hook...
Jan 31 07:16:59 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 07:16:59 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 07:16:59 localhost systemd[1]: Finished dracut cmdline hook.
Jan 31 07:16:59 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 31 07:16:59 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 07:16:59 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 31 07:16:59 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 07:16:59 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 31 07:16:59 localhost kernel: RPC: Registered udp transport module.
Jan 31 07:16:59 localhost kernel: RPC: Registered tcp transport module.
Jan 31 07:16:59 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 07:16:59 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 07:16:59 localhost rpc.statd[439]: Version 2.5.4 starting
Jan 31 07:16:59 localhost rpc.statd[439]: Initializing NSM state
Jan 31 07:16:59 localhost rpc.idmapd[444]: Setting log level to 0
Jan 31 07:16:59 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 31 07:16:59 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 07:16:59 localhost systemd-udevd[457]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 07:16:59 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 07:16:59 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 31 07:16:59 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 31 07:16:59 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 07:16:59 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 31 07:16:59 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 07:16:59 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 07:16:59 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 07:16:59 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 07:16:59 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 07:16:59 localhost systemd[1]: Reached target Network.
Jan 31 07:16:59 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 07:16:59 localhost systemd[1]: Starting dracut initqueue hook...
Jan 31 07:16:59 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 07:16:59 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 07:16:59 localhost kernel: libata version 3.00 loaded.
Jan 31 07:16:59 localhost systemd-udevd[481]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:16:59 localhost kernel:  vda: vda1
Jan 31 07:16:59 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 31 07:16:59 localhost kernel: scsi host0: ata_piix
Jan 31 07:16:59 localhost kernel: scsi host1: ata_piix
Jan 31 07:16:59 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 07:16:59 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 07:16:59 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 07:16:59 localhost systemd[1]: Reached target Initrd Root Device.
Jan 31 07:16:59 localhost kernel: ata1: found unknown device (class 0)
Jan 31 07:16:59 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 07:16:59 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 07:16:59 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 31 07:16:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 31 07:16:59 localhost systemd[1]: Reached target System Initialization.
Jan 31 07:16:59 localhost systemd[1]: Reached target Basic System.
Jan 31 07:17:00 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 07:17:00 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 07:17:00 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 07:17:00 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 31 07:17:00 localhost systemd[1]: Finished dracut initqueue hook.
Jan 31 07:17:00 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 07:17:00 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 07:17:00 localhost systemd[1]: Reached target Remote File Systems.
Jan 31 07:17:00 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 31 07:17:00 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 31 07:17:00 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 07:17:00 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 07:17:00 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 07:17:00 localhost systemd[1]: Mounting /sysroot...
Jan 31 07:17:00 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 07:17:00 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 07:17:00 localhost kernel: XFS (vda1): Ending clean mount
Jan 31 07:17:00 localhost systemd[1]: Mounted /sysroot.
Jan 31 07:17:00 localhost systemd[1]: Reached target Initrd Root File System.
Jan 31 07:17:00 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 07:17:00 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 07:17:00 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 07:17:00 localhost systemd[1]: Reached target Initrd File Systems.
Jan 31 07:17:00 localhost systemd[1]: Reached target Initrd Default Target.
Jan 31 07:17:00 localhost systemd[1]: Starting dracut mount hook...
Jan 31 07:17:00 localhost systemd[1]: Finished dracut mount hook.
Jan 31 07:17:00 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 07:17:00 localhost rpc.idmapd[444]: exiting on signal 15
Jan 31 07:17:00 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 07:17:00 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 07:17:00 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 07:17:00 localhost systemd[1]: Stopped target Network.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Timer Units.
Jan 31 07:17:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 07:17:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 07:17:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 07:17:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Basic System.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Path Units.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Remote File Systems.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Slice Units.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Socket Units.
Jan 31 07:17:00 localhost systemd[1]: Stopped target System Initialization.
Jan 31 07:17:00 localhost systemd[1]: Stopped target Local File Systems.
Jan 31 07:17:01 localhost systemd[1]: Stopped target Swaps.
Jan 31 07:17:01 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped dracut mount hook.
Jan 31 07:17:01 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 31 07:17:01 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 07:17:01 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 07:17:01 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 31 07:17:01 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 31 07:17:01 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 07:17:01 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 07:17:01 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 07:17:01 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 07:17:01 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 31 07:17:01 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 07:17:01 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 07:17:01 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Closed udev Control Socket.
Jan 31 07:17:01 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Closed udev Kernel Socket.
Jan 31 07:17:01 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 31 07:17:01 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 31 07:17:01 localhost systemd[1]: Starting Cleanup udev Database...
Jan 31 07:17:01 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 07:17:01 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 07:17:01 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Stopped Create System Users.
Jan 31 07:17:01 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 07:17:01 localhost systemd[1]: Finished Cleanup udev Database.
Jan 31 07:17:01 localhost systemd[1]: Reached target Switch Root.
Jan 31 07:17:01 localhost systemd[1]: Starting Switch Root...
Jan 31 07:17:01 localhost systemd[1]: Switching root.
Jan 31 07:17:01 localhost systemd-journald[303]: Journal stopped
Jan 31 07:17:02 localhost systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Jan 31 07:17:02 localhost kernel: audit: type=1404 audit(1769843821.278:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 07:17:02 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:17:02 localhost kernel: SELinux:  policy capability open_perms=1
Jan 31 07:17:02 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:17:02 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:17:02 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:17:02 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:17:02 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:17:02 localhost kernel: audit: type=1403 audit(1769843821.382:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 07:17:02 localhost systemd[1]: Successfully loaded SELinux policy in 109.290ms.
Jan 31 07:17:02 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.336ms.
Jan 31 07:17:02 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 07:17:02 localhost systemd[1]: Detected virtualization kvm.
Jan 31 07:17:02 localhost systemd[1]: Detected architecture x86-64.
Jan 31 07:17:02 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:17:02 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Stopped Switch Root.
Jan 31 07:17:02 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 07:17:02 localhost systemd[1]: Created slice Slice /system/getty.
Jan 31 07:17:02 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 31 07:17:02 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 31 07:17:02 localhost systemd[1]: Created slice User and Session Slice.
Jan 31 07:17:02 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 07:17:02 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 31 07:17:02 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 07:17:02 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 07:17:02 localhost systemd[1]: Stopped target Switch Root.
Jan 31 07:17:02 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 31 07:17:02 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 31 07:17:02 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 31 07:17:02 localhost systemd[1]: Reached target Path Units.
Jan 31 07:17:02 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 31 07:17:02 localhost systemd[1]: Reached target Slice Units.
Jan 31 07:17:02 localhost systemd[1]: Reached target Swaps.
Jan 31 07:17:02 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 31 07:17:02 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 31 07:17:02 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 31 07:17:02 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 31 07:17:02 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 31 07:17:02 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 07:17:02 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 07:17:02 localhost systemd[1]: Mounting Huge Pages File System...
Jan 31 07:17:02 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 31 07:17:02 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 31 07:17:02 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 31 07:17:02 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 07:17:02 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 07:17:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 07:17:02 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 31 07:17:02 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 31 07:17:02 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 31 07:17:02 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 07:17:02 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 31 07:17:02 localhost systemd[1]: Stopped Journal Service.
Jan 31 07:17:02 localhost kernel: fuse: init (API version 7.37)
Jan 31 07:17:02 localhost systemd[1]: Starting Journal Service...
Jan 31 07:17:02 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 07:17:02 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 31 07:17:02 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 07:17:02 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 31 07:17:02 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 07:17:02 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 07:17:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 07:17:02 localhost kernel: ACPI: bus type drm_connector registered
Jan 31 07:17:02 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:02 localhost systemd-journald[676]: Journal started
Jan 31 07:17:02 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 07:17:01 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 31 07:17:01 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Mounted Huge Pages File System.
Jan 31 07:17:02 localhost systemd[1]: Started Journal Service.
Jan 31 07:17:02 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 07:17:02 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 31 07:17:02 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 31 07:17:02 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 07:17:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 07:17:02 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 31 07:17:02 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 07:17:02 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 31 07:17:02 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 07:17:02 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 07:17:02 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 07:17:02 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 07:17:02 localhost systemd[1]: Mounting FUSE Control File System...
Jan 31 07:17:02 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 07:17:02 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 31 07:17:02 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 07:17:02 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 07:17:02 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 07:17:02 localhost systemd[1]: Starting Create System Users...
Jan 31 07:17:02 localhost systemd[1]: Mounted FUSE Control File System.
Jan 31 07:17:02 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 07:17:02 localhost systemd-journald[676]: Received client request to flush runtime journal.
Jan 31 07:17:02 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 07:17:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 07:17:02 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 07:17:02 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 07:17:02 localhost systemd[1]: Finished Create System Users.
Jan 31 07:17:02 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 07:17:02 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 07:17:02 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 07:17:02 localhost systemd[1]: Reached target Local File Systems.
Jan 31 07:17:02 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 07:17:02 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 07:17:02 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 07:17:02 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 07:17:02 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 07:17:02 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 07:17:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 07:17:02 localhost bootctl[693]: Couldn't find EFI system partition, skipping.
Jan 31 07:17:02 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 07:17:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 07:17:02 localhost systemd[1]: Starting Security Auditing Service...
Jan 31 07:17:02 localhost systemd[1]: Starting RPC Bind...
Jan 31 07:17:02 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 07:17:02 localhost auditd[699]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 07:17:02 localhost auditd[699]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 07:17:02 localhost systemd[1]: Started RPC Bind.
Jan 31 07:17:02 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 07:17:02 localhost augenrules[704]: /sbin/augenrules: No change
Jan 31 07:17:02 localhost augenrules[719]: No rules
Jan 31 07:17:02 localhost augenrules[719]: enabled 1
Jan 31 07:17:02 localhost augenrules[719]: failure 1
Jan 31 07:17:02 localhost augenrules[719]: pid 699
Jan 31 07:17:02 localhost augenrules[719]: rate_limit 0
Jan 31 07:17:02 localhost augenrules[719]: backlog_limit 8192
Jan 31 07:17:02 localhost augenrules[719]: lost 0
Jan 31 07:17:02 localhost augenrules[719]: backlog 1
Jan 31 07:17:02 localhost augenrules[719]: backlog_wait_time 60000
Jan 31 07:17:02 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 31 07:17:02 localhost augenrules[719]: enabled 1
Jan 31 07:17:02 localhost augenrules[719]: failure 1
Jan 31 07:17:02 localhost augenrules[719]: pid 699
Jan 31 07:17:02 localhost augenrules[719]: rate_limit 0
Jan 31 07:17:02 localhost augenrules[719]: backlog_limit 8192
Jan 31 07:17:02 localhost augenrules[719]: lost 0
Jan 31 07:17:02 localhost augenrules[719]: backlog 2
Jan 31 07:17:02 localhost augenrules[719]: backlog_wait_time 60000
Jan 31 07:17:02 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 31 07:17:02 localhost augenrules[719]: enabled 1
Jan 31 07:17:02 localhost augenrules[719]: failure 1
Jan 31 07:17:02 localhost augenrules[719]: pid 699
Jan 31 07:17:02 localhost augenrules[719]: rate_limit 0
Jan 31 07:17:02 localhost augenrules[719]: backlog_limit 8192
Jan 31 07:17:02 localhost augenrules[719]: lost 0
Jan 31 07:17:02 localhost augenrules[719]: backlog 2
Jan 31 07:17:02 localhost augenrules[719]: backlog_wait_time 60000
Jan 31 07:17:02 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 31 07:17:02 localhost systemd[1]: Started Security Auditing Service.
Jan 31 07:17:02 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 07:17:02 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 07:17:02 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 31 07:17:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 07:17:02 localhost systemd-udevd[727]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 07:17:02 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 07:17:02 localhost systemd[1]: Starting Update is Completed...
Jan 31 07:17:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 07:17:02 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 07:17:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 07:17:02 localhost systemd[1]: Finished Update is Completed.
Jan 31 07:17:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 07:17:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 07:17:02 localhost systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:17:02 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 07:17:02 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 07:17:02 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 07:17:02 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 07:17:02 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 07:17:02 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 07:17:02 localhost kernel: Console: switching to colour dummy device 80x25
Jan 31 07:17:02 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 07:17:02 localhost kernel: [drm] features: -context_init
Jan 31 07:17:02 localhost kernel: [drm] number of scanouts: 1
Jan 31 07:17:02 localhost kernel: [drm] number of cap sets: 0
Jan 31 07:17:02 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 07:17:02 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 07:17:02 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 31 07:17:02 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 07:17:03 localhost systemd[1]: Reached target System Initialization.
Jan 31 07:17:03 localhost systemd[1]: Started dnf makecache --timer.
Jan 31 07:17:03 localhost systemd[1]: Started Daily rotation of log files.
Jan 31 07:17:03 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 07:17:03 localhost systemd[1]: Reached target Timer Units.
Jan 31 07:17:03 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 07:17:03 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 07:17:03 localhost systemd[1]: Reached target Socket Units.
Jan 31 07:17:03 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 31 07:17:03 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 07:17:03 localhost kernel: kvm_amd: TSC scaling supported
Jan 31 07:17:03 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 31 07:17:03 localhost kernel: kvm_amd: Nested Paging enabled
Jan 31 07:17:03 localhost kernel: kvm_amd: LBR virtualization supported
Jan 31 07:17:03 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 31 07:17:03 localhost systemd[1]: Reached target Basic System.
Jan 31 07:17:03 localhost dbus-broker-lau[786]: Ready
Jan 31 07:17:03 localhost systemd[1]: Starting NTP client/server...
Jan 31 07:17:03 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 07:17:03 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 07:17:03 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 07:17:03 localhost systemd[1]: Started irqbalance daemon.
Jan 31 07:17:03 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 07:17:03 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:17:03 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:17:03 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:17:03 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 31 07:17:03 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 07:17:03 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 31 07:17:03 localhost systemd[1]: Starting User Login Management...
Jan 31 07:17:03 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 07:17:03 localhost chronyd[826]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 07:17:03 localhost chronyd[826]: Loaded 0 symmetric keys
Jan 31 07:17:03 localhost chronyd[826]: Using right/UTC timezone to obtain leap second data
Jan 31 07:17:03 localhost chronyd[826]: Loaded seccomp filter (level 2)
Jan 31 07:17:03 localhost systemd[1]: Started NTP client/server.
Jan 31 07:17:03 localhost systemd-logind[810]: New seat seat0.
Jan 31 07:17:03 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 07:17:03 localhost systemd-logind[810]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 07:17:03 localhost systemd-logind[810]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 07:17:03 localhost systemd[1]: Started User Login Management.
Jan 31 07:17:03 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 07:17:03 localhost iptables.init[796]: iptables: Applying firewall rules: [  OK  ]
Jan 31 07:17:03 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 07:17:04 localhost cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 07:17:04 +0000. Up 6.46 seconds.
Jan 31 07:17:04 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 31 07:17:04 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 31 07:17:04 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpcqpju7ly.mount: Deactivated successfully.
Jan 31 07:17:04 localhost systemd[1]: Starting Hostname Service...
Jan 31 07:17:04 localhost systemd[1]: Started Hostname Service.
Jan 31 07:17:04 np0005603654.novalocal systemd-hostnamed[850]: Hostname set to <np0005603654.novalocal> (static)
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Reached target Preparation for Network.
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Starting Network Manager...
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5273] NetworkManager (version 1.54.3-2.el9) is starting... (boot:cfd2689d-5023-49cf-871a-74cb51f0f7c6)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5276] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5447] manager[0x5561191d4000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5532] hostname: hostname: using hostnamed
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5532] hostname: static hostname changed from (none) to "np0005603654.novalocal"
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5540] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5654] manager[0x5561191d4000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5655] manager[0x5561191d4000]: rfkill: WWAN hardware radio set enabled
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5745] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5746] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5746] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5747] manager: Networking is enabled by state file
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5748] settings: Loaded settings plugin: keyfile (internal)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5811] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5835] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5853] dhcp: init: Using DHCP client 'internal'
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5858] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5873] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5887] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5902] device (lo): Activation: starting connection 'lo' (19c7276e-9b34-4ddb-9414-c336dedfbb59)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5911] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5914] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5944] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5949] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5952] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5954] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5956] device (eth0): carrier: link connected
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5960] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5966] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5981] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5987] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5988] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5990] manager: NetworkManager state is now CONNECTING
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Started Network Manager.
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.5992] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6004] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6007] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Reached target Network.
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6050] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6057] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6074] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6180] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6182] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6187] device (lo): Activation: successful, device activated.
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6195] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6197] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6200] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6201] device (eth0): Activation: successful, device activated.
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6207] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 07:17:04 np0005603654.novalocal NetworkManager[854]: <info>  [1769843824.6210] manager: startup complete
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Reached target NFS client services.
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Reached target Remote File Systems.
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 07:17:04 np0005603654.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 07:17:04 +0000. Up 7.40 seconds.
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.204         | 255.255.255.0 | global | fa:16:3e:18:29:a9 |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe18:29a9/64 |       .       |  link  | fa:16:3e:18:29:a9 |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 07:17:05 np0005603654.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 07:17:06 np0005603654.novalocal useradd[984]: new group: name=cloud-user, GID=1001
Jan 31 07:17:06 np0005603654.novalocal useradd[984]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 31 07:17:06 np0005603654.novalocal useradd[984]: add 'cloud-user' to group 'adm'
Jan 31 07:17:06 np0005603654.novalocal useradd[984]: add 'cloud-user' to group 'systemd-journal'
Jan 31 07:17:06 np0005603654.novalocal useradd[984]: add 'cloud-user' to shadow group 'adm'
Jan 31 07:17:06 np0005603654.novalocal useradd[984]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Generating public/private rsa key pair.
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: The key fingerprint is:
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: SHA256:JfV2OZ/WwGzmvK1bwJw0pZxnjtTipwulxSWCPylhg9g root@np0005603654.novalocal
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: The key's randomart image is:
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: +---[RSA 3072]----+
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |          .     .|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |       o o o + * |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |      . E * + ^ =|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |         + = / #o|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |        S . + &o=|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |           . =.* |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |            o o o|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |             . + |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |              +. |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: The key fingerprint is:
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: SHA256:y+4iCkXk6OFE9EFGExdq9DUkjeOKq3VeXmoU8I+DibM root@np0005603654.novalocal
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: The key's randomart image is:
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: +---[ECDSA 256]---+
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |.o+O.+=+         |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |.++ Boo..        |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |.oo+.+.          |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |+.o  .o          |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: | oo..o +S        |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: | oo.o +...       |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |. oo....+        |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: | +Eo.oo+         |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |o ...oooo        |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: The key fingerprint is:
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: SHA256:cHl8al1OOJaM3hyXZZOam28d1ob6wEPSKPAX9K1siLE root@np0005603654.novalocal
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: The key's randomart image is:
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: +--[ED25519 256]--+
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |          .    .+|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |         + + + =.|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |      o + = X O  |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |       = * & @   |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |        E O X +..|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |         + = o.oo|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |            +.o.o|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |            .o o.|
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: |             .o  |
Jan 31 07:17:06 np0005603654.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Reached target Network is Online.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting System Logging Service...
Jan 31 07:17:06 np0005603654.novalocal sm-notify[1000]: Version 2.5.4 starting
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting Permit User Sessions...
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Finished Permit User Sessions.
Jan 31 07:17:06 np0005603654.novalocal sshd[1002]: Server listening on 0.0.0.0 port 22.
Jan 31 07:17:06 np0005603654.novalocal sshd[1002]: Server listening on :: port 22.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Started Command Scheduler.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Started Getty on tty1.
Jan 31 07:17:06 np0005603654.novalocal crond[1005]: (CRON) STARTUP (1.5.7)
Jan 31 07:17:06 np0005603654.novalocal crond[1005]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 31 07:17:06 np0005603654.novalocal crond[1005]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 76% if used.)
Jan 31 07:17:06 np0005603654.novalocal crond[1005]: (CRON) INFO (running with inotify support)
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Reached target Login Prompts.
Jan 31 07:17:06 np0005603654.novalocal rsyslogd[1001]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1001" x-info="https://www.rsyslog.com"] start
Jan 31 07:17:06 np0005603654.novalocal rsyslogd[1001]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Started System Logging Service.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Reached target Multi-User System.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 07:17:06 np0005603654.novalocal rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:17:06 np0005603654.novalocal kdumpctl[1013]: kdump: No kdump initial ramdisk found.
Jan 31 07:17:06 np0005603654.novalocal kdumpctl[1013]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 07:17:06 np0005603654.novalocal cloud-init[1133]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 07:17:06 +0000. Up 9.29 seconds.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 07:17:06 np0005603654.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 07:17:07 np0005603654.novalocal dracut[1261]: dracut-057-102.git20250818.el9
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1324]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 07:17:07 +0000. Up 9.65 seconds.
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1330]: Unable to negotiate with 38.102.83.114 port 42396: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1336]: Connection closed by 38.102.83.114 port 42406 [preauth]
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1338]: Unable to negotiate with 38.102.83.114 port 42416: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1340]: #############################################################
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1342]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1346]: 256 SHA256:y+4iCkXk6OFE9EFGExdq9DUkjeOKq3VeXmoU8I+DibM root@np0005603654.novalocal (ECDSA)
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1343]: Unable to negotiate with 38.102.83.114 port 42420: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1349]: 256 SHA256:cHl8al1OOJaM3hyXZZOam28d1ob6wEPSKPAX9K1siLE root@np0005603654.novalocal (ED25519)
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1351]: 3072 SHA256:JfV2OZ/WwGzmvK1bwJw0pZxnjtTipwulxSWCPylhg9g root@np0005603654.novalocal (RSA)
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1353]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1354]: #############################################################
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1306]: Connection closed by 38.102.83.114 port 42392 [preauth]
Jan 31 07:17:07 np0005603654.novalocal cloud-init[1324]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 07:17:07 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.81 seconds
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1372]: Unable to negotiate with 38.102.83.114 port 42442: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 31 07:17:07 np0005603654.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 07:17:07 np0005603654.novalocal systemd[1]: Reached target Cloud-init target.
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1377]: Unable to negotiate with 38.102.83.114 port 42448: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1348]: Connection closed by 38.102.83.114 port 42424 [preauth]
Jan 31 07:17:07 np0005603654.novalocal sshd-session[1364]: Connection closed by 38.102.83.114 port 42428 [preauth]
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 07:17:07 np0005603654.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: memstrack is not available
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: memstrack is not available
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: *** Including module: systemd ***
Jan 31 07:17:08 np0005603654.novalocal dracut[1263]: *** Including module: fips ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: systemd-initrd ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: i18n ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: drm ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: prefixdevname ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: kernel-modules ***
Jan 31 07:17:09 np0005603654.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 31 07:17:09 np0005603654.novalocal chronyd[826]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Jan 31 07:17:09 np0005603654.novalocal chronyd[826]: System clock TAI offset set to 37 seconds
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: kernel-modules-extra ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: qemu ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: fstab-sys ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: rootfs-block ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: terminfo ***
Jan 31 07:17:09 np0005603654.novalocal dracut[1263]: *** Including module: udev-rules ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: Skipping udev rule: 91-permissions.rules
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: virtiofs ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: dracut-systemd ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: usrmount ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: base ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: fs-lib ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: kdumpbase ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:   microcode_ctl module: mangling fw_dir
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 07:17:10 np0005603654.novalocal dracut[1263]: *** Including module: openssl ***
Jan 31 07:17:11 np0005603654.novalocal dracut[1263]: *** Including module: shutdown ***
Jan 31 07:17:11 np0005603654.novalocal dracut[1263]: *** Including module: squash ***
Jan 31 07:17:11 np0005603654.novalocal dracut[1263]: *** Including modules done ***
Jan 31 07:17:11 np0005603654.novalocal dracut[1263]: *** Installing kernel module dependencies ***
Jan 31 07:17:11 np0005603654.novalocal dracut[1263]: *** Installing kernel module dependencies done ***
Jan 31 07:17:11 np0005603654.novalocal dracut[1263]: *** Resolving executable dependencies ***
Jan 31 07:17:12 np0005603654.novalocal dracut[1263]: *** Resolving executable dependencies done ***
Jan 31 07:17:12 np0005603654.novalocal dracut[1263]: *** Generating early-microcode cpio image ***
Jan 31 07:17:12 np0005603654.novalocal dracut[1263]: *** Store current command line parameters ***
Jan 31 07:17:12 np0005603654.novalocal dracut[1263]: Stored kernel commandline:
Jan 31 07:17:12 np0005603654.novalocal dracut[1263]: No dracut internal kernel commandline stored in the initramfs
Jan 31 07:17:12 np0005603654.novalocal dracut[1263]: *** Install squash loader ***
Jan 31 07:17:13 np0005603654.novalocal dracut[1263]: *** Squashing the files inside the initramfs ***
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 35 affinity is now unmanaged
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 33 affinity is now unmanaged
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 31 affinity is now unmanaged
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 28 affinity is now unmanaged
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 34 affinity is now unmanaged
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 32 affinity is now unmanaged
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 30 affinity is now unmanaged
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 07:17:13 np0005603654.novalocal irqbalance[797]: IRQ 29 affinity is now unmanaged
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: *** Squashing the files inside the initramfs done ***
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: *** Hardlinking files ***
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: Mode:           real
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: Files:          50
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: Linked:         0 files
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: Compared:       0 xattrs
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: Compared:       0 files
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: Saved:          0 B
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: Duration:       0.000401 seconds
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: *** Hardlinking files done ***
Jan 31 07:17:14 np0005603654.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:17:14 np0005603654.novalocal dracut[1263]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 07:17:15 np0005603654.novalocal kdumpctl[1013]: kdump: kexec: loaded kdump kernel
Jan 31 07:17:15 np0005603654.novalocal kdumpctl[1013]: kdump: Starting kdump: [OK]
Jan 31 07:17:15 np0005603654.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 31 07:17:15 np0005603654.novalocal systemd[1]: Startup finished in 1.226s (kernel) + 2.462s (initrd) + 14.091s (userspace) = 17.780s.
Jan 31 07:17:23 np0005603654.novalocal sshd-session[4298]: Accepted publickey for zuul from 38.102.83.114 port 49928 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 31 07:17:23 np0005603654.novalocal systemd-logind[810]: New session 1 of user zuul.
Jan 31 07:17:23 np0005603654.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 31 07:17:23 np0005603654.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 07:17:23 np0005603654.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 07:17:23 np0005603654.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Queued start job for default target Main User Target.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Created slice User Application Slice.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Reached target Paths.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Reached target Timers.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Starting D-Bus User Message Bus Socket...
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Starting Create User's Volatile Files and Directories...
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Finished Create User's Volatile Files and Directories.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Reached target Sockets.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Reached target Basic System.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Reached target Main User Target.
Jan 31 07:17:23 np0005603654.novalocal systemd[4302]: Startup finished in 108ms.
Jan 31 07:17:23 np0005603654.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 31 07:17:23 np0005603654.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 31 07:17:23 np0005603654.novalocal sshd-session[4298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:17:24 np0005603654.novalocal python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:17:26 np0005603654.novalocal python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:17:32 np0005603654.novalocal python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:17:33 np0005603654.novalocal python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 07:17:34 np0005603654.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:17:35 np0005603654.novalocal python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa5O9lKhcnJXcyffQRpyEvvzFXs/Px07OQcYWlNSQzQy69bFCiHMBf9yPz0H5ta32iRIAsTMZKeaTCDgD46P3uAAgQ4RHyb0gbmYL9hFmPnap5MNBmHSGn9gkZXbgMI07Ed2hkFr2fA0n5+vS4rY6a9wKPKaWbSPbQkENWM/F9wBXFYniJwThSSv2c5WXbnvef0/V7s9qR2HNYV7nYoMPya3mzJfeG47t3476ga3RY/y8mVb+PxBhi//uHmvN0736jSk0OmhKJ2FHrt+s+z3R553U/Uil2UtgtwrAdHpuIwPiJVQXtWGvn2boxBDJG1mtL1A+Wru9BiCfEXo9ZjEGZgVQzqzgnTYCqPswohDmxsvf8e+zDXr7BAFvkfr/JxpQ0TX6Mq5y48h9WwDyanlwZyAl+8Wsz1Th/96Gp4sbLdtQ7LLVMbZwOaeR/0+mPdCuA32yaWHczUraAQm+pfiooRjCQ8H9KX8UwywvMHHQXmW7B4fwxGslaY0bOJD00jZc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:35 np0005603654.novalocal python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:36 np0005603654.novalocal python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:17:36 np0005603654.novalocal python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843855.9812305-207-39451344373726/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=35267d42fcc5446780dcca02218de684_id_rsa follow=False checksum=d9118879c95ba22e4ddb545558b1d4a84e068a6e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:37 np0005603654.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:17:37 np0005603654.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843856.9518955-240-240146273664216/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=35267d42fcc5446780dcca02218de684_id_rsa.pub follow=False checksum=3c6ac15273500854da06ac34e7661f60c187d25f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:38 np0005603654.novalocal python3[4974]: ansible-ping Invoked with data=pong
Jan 31 07:17:39 np0005603654.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:17:41 np0005603654.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 07:17:42 np0005603654.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:42 np0005603654.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:42 np0005603654.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:43 np0005603654.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:43 np0005603654.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:43 np0005603654.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:45 np0005603654.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umqxssyfqisjsgeyyhuoikdvvismecgm ; /usr/bin/python3'
Jan 31 07:17:45 np0005603654.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:45 np0005603654.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:45 np0005603654.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:45 np0005603654.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twaemvekuofqgisjmphosnpoysydgybn ; /usr/bin/python3'
Jan 31 07:17:45 np0005603654.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:45 np0005603654.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:17:45 np0005603654.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:46 np0005603654.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlmxuponequhhsokdmcqdhfyaybkdydl ; /usr/bin/python3'
Jan 31 07:17:46 np0005603654.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:46 np0005603654.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843865.4207375-21-84417884976592/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:46 np0005603654.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:46 np0005603654.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:47 np0005603654.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:47 np0005603654.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:47 np0005603654.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:47 np0005603654.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:48 np0005603654.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:48 np0005603654.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:48 np0005603654.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:48 np0005603654.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:49 np0005603654.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:49 np0005603654.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:49 np0005603654.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:50 np0005603654.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:50 np0005603654.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:50 np0005603654.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:50 np0005603654.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:51 np0005603654.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:51 np0005603654.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:51 np0005603654.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:52 np0005603654.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:52 np0005603654.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:52 np0005603654.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:52 np0005603654.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:53 np0005603654.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:53 np0005603654.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:53 np0005603654.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:17:56 np0005603654.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidugzfeagypwftxxakbzpfbynvzpryf ; /usr/bin/python3'
Jan 31 07:17:56 np0005603654.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:56 np0005603654.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 07:17:56 np0005603654.novalocal systemd[1]: Starting Time & Date Service...
Jan 31 07:17:56 np0005603654.novalocal systemd[1]: Started Time & Date Service.
Jan 31 07:17:56 np0005603654.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Jan 31 07:17:56 np0005603654.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:57 np0005603654.novalocal sudo[6089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqsglvckpqpebwaeucapyeuhugeitswf ; /usr/bin/python3'
Jan 31 07:17:57 np0005603654.novalocal sudo[6089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:57 np0005603654.novalocal python3[6091]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:58 np0005603654.novalocal sudo[6089]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:58 np0005603654.novalocal python3[6167]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:17:58 np0005603654.novalocal python3[6238]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769843878.1921985-153-191976935926530/source _original_basename=tmpyb7iblth follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:59 np0005603654.novalocal python3[6338]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:17:59 np0005603654.novalocal python3[6409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769843879.0172036-183-191918166246311/source _original_basename=tmp5mtfopqg follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:00 np0005603654.novalocal sudo[6509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovvjaeldpgtjuvsttopeglyciwazzhiz ; /usr/bin/python3'
Jan 31 07:18:00 np0005603654.novalocal sudo[6509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:00 np0005603654.novalocal python3[6511]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:18:00 np0005603654.novalocal sudo[6509]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:00 np0005603654.novalocal sudo[6582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icygdavzhpppjvxxpqpahwvywujplewi ; /usr/bin/python3'
Jan 31 07:18:00 np0005603654.novalocal sudo[6582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:00 np0005603654.novalocal python3[6584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769843880.0243-231-241357555801947/source _original_basename=tmp7asv8n3a follow=False checksum=01954034105cdb65b42722894a5c1036808c70c7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:00 np0005603654.novalocal sudo[6582]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:01 np0005603654.novalocal python3[6632]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:01 np0005603654.novalocal python3[6658]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:01 np0005603654.novalocal sudo[6736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgbasjcwvjazkghzxbukxarikfdwslxu ; /usr/bin/python3'
Jan 31 07:18:01 np0005603654.novalocal sudo[6736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:01 np0005603654.novalocal python3[6738]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:18:01 np0005603654.novalocal sudo[6736]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:01 np0005603654.novalocal sudo[6809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdqhntyekglettwfkixfbljhhctesfcr ; /usr/bin/python3'
Jan 31 07:18:02 np0005603654.novalocal sudo[6809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:02 np0005603654.novalocal python3[6811]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843881.6248045-273-124693630813848/source _original_basename=tmpp3qcjkyo follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:02 np0005603654.novalocal sudo[6809]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:02 np0005603654.novalocal sudo[6860]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmmcaqqjyzeloeyhagxciqmjtbxbqqzx ; /usr/bin/python3'
Jan 31 07:18:02 np0005603654.novalocal sudo[6860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:02 np0005603654.novalocal python3[6862]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-e2a1-3900-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:02 np0005603654.novalocal sudo[6860]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:03 np0005603654.novalocal python3[6890]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-e2a1-3900-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 07:18:04 np0005603654.novalocal python3[6918]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:15 np0005603654.novalocal chronyd[826]: Selected source 72.38.129.202 (2.centos.pool.ntp.org)
Jan 31 07:18:24 np0005603654.novalocal sudo[6942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aruodqcofcaujgkdgnlgtbqgccexsdng ; /usr/bin/python3'
Jan 31 07:18:24 np0005603654.novalocal sudo[6942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:24 np0005603654.novalocal python3[6944]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:24 np0005603654.novalocal sudo[6942]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:26 np0005603654.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 07:18:57 np0005603654.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 07:18:57 np0005603654.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8295] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 07:18:57 np0005603654.novalocal systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8458] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8484] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8487] device (eth1): carrier: link connected
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8489] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8493] policy: auto-activating connection 'Wired connection 1' (ac4dec35-f971-3578-8a87-5dd4fcab175b)
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8496] device (eth1): Activation: starting connection 'Wired connection 1' (ac4dec35-f971-3578-8a87-5dd4fcab175b)
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8498] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8501] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8505] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:18:57 np0005603654.novalocal NetworkManager[854]: <info>  [1769843937.8508] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:18:59 np0005603654.novalocal python3[6974]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-ba17-aed8-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:19:08 np0005603654.novalocal sudo[7052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emixmrjnmhfxfqwxkvmomvytevxzrvyj ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:19:08 np0005603654.novalocal sudo[7052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:19:09 np0005603654.novalocal python3[7054]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:19:09 np0005603654.novalocal sudo[7052]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:09 np0005603654.novalocal sudo[7125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhqonlgsbuiekvrvairxbzlyfytcdeew ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:19:09 np0005603654.novalocal sudo[7125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:19:09 np0005603654.novalocal python3[7127]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843948.7711759-102-141858436519870/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=84fe13d8cf25734c7948731435278287c7dfc69c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:19:09 np0005603654.novalocal sudo[7125]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:10 np0005603654.novalocal sudo[7175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukesekyhhstpkweaornhhnzsblnrvwht ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:19:10 np0005603654.novalocal sudo[7175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:19:10 np0005603654.novalocal python3[7177]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3072] caught SIGTERM, shutting down normally.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Stopping Network Manager...
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3079] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3079] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3079] dhcp4 (eth0): state changed no lease
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3082] manager: NetworkManager state is now CONNECTING
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3167] dhcp4 (eth1): canceled DHCP transaction
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3167] dhcp4 (eth1): state changed no lease
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[854]: <info>  [1769843950.3204] exiting (success)
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Stopped Network Manager.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: NetworkManager.service: Consumed 1.222s CPU time, 10.0M memory peak.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Starting Network Manager...
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.3783] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:cfd2689d-5023-49cf-871a-74cb51f0f7c6)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.3785] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.3822] manager[0x556753efc000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Starting Hostname Service...
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Started Hostname Service.
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4831] hostname: hostname: using hostnamed
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4832] hostname: static hostname changed from (none) to "np0005603654.novalocal"
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4839] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4844] manager[0x556753efc000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4844] manager[0x556753efc000]: rfkill: WWAN hardware radio set enabled
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4887] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4888] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4889] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4890] manager: Networking is enabled by state file
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4893] settings: Loaded settings plugin: keyfile (internal)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4899] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4938] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4955] dhcp: init: Using DHCP client 'internal'
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4960] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4967] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4974] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4984] device (lo): Activation: starting connection 'lo' (19c7276e-9b34-4ddb-9414-c336dedfbb59)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4993] device (eth0): carrier: link connected
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.4999] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5009] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5010] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5019] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5029] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5036] device (eth1): carrier: link connected
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5041] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5048] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ac4dec35-f971-3578-8a87-5dd4fcab175b) (indicated)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5048] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5054] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5062] device (eth1): Activation: starting connection 'Wired connection 1' (ac4dec35-f971-3578-8a87-5dd4fcab175b)
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Started Network Manager.
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5074] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5080] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5083] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5085] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5088] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5091] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5094] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5098] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5101] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5112] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5117] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5124] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5128] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5149] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5150] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5155] device (lo): Activation: successful, device activated.
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5162] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5174] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 07:19:10 np0005603654.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5225] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal sudo[7175]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5283] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5285] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5288] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5290] device (eth0): Activation: successful, device activated.
Jan 31 07:19:10 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843950.5295] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 07:19:10 np0005603654.novalocal python3[7261]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-ba17-aed8-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:19:20 np0005603654.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:19:27 np0005603654.novalocal systemd[4302]: Starting Mark boot as successful...
Jan 31 07:19:27 np0005603654.novalocal systemd[4302]: Finished Mark boot as successful.
Jan 31 07:19:40 np0005603654.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.5749] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:19:55 np0005603654.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:19:55 np0005603654.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6060] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6062] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6072] device (eth1): Activation: successful, device activated.
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6080] manager: startup complete
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6082] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <warn>  [1769843995.6090] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6101] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6237] dhcp4 (eth1): canceled DHCP transaction
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6238] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6238] dhcp4 (eth1): state changed no lease
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6262] policy: auto-activating connection 'ci-private-network' (39443809-bacc-53e1-8f0a-bd4718cbb099)
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6272] device (eth1): Activation: starting connection 'ci-private-network' (39443809-bacc-53e1-8f0a-bd4718cbb099)
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6274] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6280] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6290] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6302] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6380] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6382] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:19:55 np0005603654.novalocal NetworkManager[7190]: <info>  [1769843995.6389] device (eth1): Activation: successful, device activated.
Jan 31 07:20:05 np0005603654.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:20:10 np0005603654.novalocal sshd-session[4311]: Received disconnect from 38.102.83.114 port 49928:11: disconnected by user
Jan 31 07:20:10 np0005603654.novalocal sshd-session[4311]: Disconnected from user zuul 38.102.83.114 port 49928
Jan 31 07:20:10 np0005603654.novalocal sshd-session[4298]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:20:10 np0005603654.novalocal systemd-logind[810]: Session 1 logged out. Waiting for processes to exit.
Jan 31 07:20:11 np0005603654.novalocal sshd-session[7290]: Accepted publickey for zuul from 38.102.83.114 port 47822 ssh2: RSA SHA256:7fpkPihK+1pYJj229Mqe0V6aalzFoVGtAbEqTCFuZew
Jan 31 07:20:11 np0005603654.novalocal systemd-logind[810]: New session 3 of user zuul.
Jan 31 07:20:11 np0005603654.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 31 07:20:11 np0005603654.novalocal sshd-session[7290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:20:11 np0005603654.novalocal sudo[7369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzimqnoizsincretwmyudmrrqoffwpoe ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:20:11 np0005603654.novalocal sudo[7369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:11 np0005603654.novalocal python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:20:11 np0005603654.novalocal sudo[7369]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:12 np0005603654.novalocal sudo[7442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axofhzulixyrnnmtfvmwhwmplfscjfoi ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:20:12 np0005603654.novalocal sudo[7442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:12 np0005603654.novalocal python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844011.624704-267-43875188338576/source _original_basename=tmpj5mdodk7 follow=False checksum=3bebef0cea8054543b9a89b7b2bae112a1674a69 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:20:12 np0005603654.novalocal sudo[7442]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:14 np0005603654.novalocal sshd-session[7293]: Connection closed by 38.102.83.114 port 47822
Jan 31 07:20:14 np0005603654.novalocal sshd-session[7290]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:20:14 np0005603654.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 07:20:14 np0005603654.novalocal systemd-logind[810]: Session 3 logged out. Waiting for processes to exit.
Jan 31 07:20:14 np0005603654.novalocal systemd-logind[810]: Removed session 3.
Jan 31 07:22:27 np0005603654.novalocal systemd[4302]: Created slice User Background Tasks Slice.
Jan 31 07:22:27 np0005603654.novalocal systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 07:22:27 np0005603654.novalocal systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 07:28:32 np0005603654.novalocal sshd-session[7474]: Accepted publickey for zuul from 38.102.83.114 port 35594 ssh2: RSA SHA256:7fpkPihK+1pYJj229Mqe0V6aalzFoVGtAbEqTCFuZew
Jan 31 07:28:32 np0005603654.novalocal systemd-logind[810]: New session 4 of user zuul.
Jan 31 07:28:32 np0005603654.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 31 07:28:32 np0005603654.novalocal sshd-session[7474]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:28:32 np0005603654.novalocal sudo[7501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uersdkzkscskmjmcepepshvmcaiwyxtp ; /usr/bin/python3'
Jan 31 07:28:32 np0005603654.novalocal sudo[7501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:32 np0005603654.novalocal python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-cda0-b33d-000000002167-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:28:32 np0005603654.novalocal sudo[7501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:32 np0005603654.novalocal sudo[7530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obeurwuvbglcjrbegbfbmgyasgjynkox ; /usr/bin/python3'
Jan 31 07:28:32 np0005603654.novalocal sudo[7530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:33 np0005603654.novalocal python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:28:33 np0005603654.novalocal sudo[7530]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:33 np0005603654.novalocal sudo[7556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxltdqysohxgjnherkxxyjhjtwlliunl ; /usr/bin/python3'
Jan 31 07:28:33 np0005603654.novalocal sudo[7556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:33 np0005603654.novalocal python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:28:33 np0005603654.novalocal sudo[7556]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:33 np0005603654.novalocal sudo[7582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwmvvztnwjlxxvvpcrznjhmhcrcrpezl ; /usr/bin/python3'
Jan 31 07:28:33 np0005603654.novalocal sudo[7582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:33 np0005603654.novalocal python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:28:33 np0005603654.novalocal sudo[7582]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:33 np0005603654.novalocal sudo[7608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwikqjxguhjwrjfdaqmuooijmbwcjxtq ; /usr/bin/python3'
Jan 31 07:28:33 np0005603654.novalocal sudo[7608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:33 np0005603654.novalocal python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:28:33 np0005603654.novalocal sudo[7608]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:34 np0005603654.novalocal sudo[7634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zizsebtbzyhisdkzewuyfxhwoscmkgnm ; /usr/bin/python3'
Jan 31 07:28:34 np0005603654.novalocal sudo[7634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:34 np0005603654.novalocal python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:28:34 np0005603654.novalocal sudo[7634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:34 np0005603654.novalocal sudo[7712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krrcqexddpmfkwyjcnqjhajlcfzanwjd ; /usr/bin/python3'
Jan 31 07:28:34 np0005603654.novalocal sudo[7712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:34 np0005603654.novalocal python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:28:34 np0005603654.novalocal sudo[7712]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:35 np0005603654.novalocal sudo[7785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckwgfqxabflwjbgxnhvdidzlltmmhyh ; /usr/bin/python3'
Jan 31 07:28:35 np0005603654.novalocal sudo[7785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:35 np0005603654.novalocal python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844514.5020757-497-192748130138093/source _original_basename=tmpg9flo01a follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:28:35 np0005603654.novalocal sudo[7785]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:35 np0005603654.novalocal sudo[7835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdpwatojimxjdppfjownybbuhxvdjjvq ; /usr/bin/python3'
Jan 31 07:28:35 np0005603654.novalocal sudo[7835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:35 np0005603654.novalocal python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:28:35 np0005603654.novalocal systemd[1]: Reloading.
Jan 31 07:28:36 np0005603654.novalocal systemd-rc-local-generator[7854]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:28:36 np0005603654.novalocal sudo[7835]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:37 np0005603654.novalocal sudo[7891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmbensauvbjhotafvqyzuomnyoiyolhl ; /usr/bin/python3'
Jan 31 07:28:37 np0005603654.novalocal sudo[7891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:37 np0005603654.novalocal python3[7893]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 07:28:37 np0005603654.novalocal sudo[7891]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:37 np0005603654.novalocal sudo[7918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzjuuqqydohrwkorslbkdkkdcapardar ; /usr/bin/python3'
Jan 31 07:28:37 np0005603654.novalocal sudo[7918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:38 np0005603654.novalocal python3[7920]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:28:38 np0005603654.novalocal sudo[7918]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:38 np0005603654.novalocal sudo[7946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttkyvpyhjysaxlecyyyxinknytksrprx ; /usr/bin/python3'
Jan 31 07:28:38 np0005603654.novalocal sudo[7946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:38 np0005603654.novalocal python3[7948]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:28:38 np0005603654.novalocal sudo[7946]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:38 np0005603654.novalocal sudo[7974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyyvpqeypuekgexgqgiuloxqzujbkdnb ; /usr/bin/python3'
Jan 31 07:28:38 np0005603654.novalocal sudo[7974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:38 np0005603654.novalocal python3[7976]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:28:38 np0005603654.novalocal sudo[7974]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:38 np0005603654.novalocal sudo[8002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptffvfsamklzgtxwtvyhjcjovtbgggia ; /usr/bin/python3'
Jan 31 07:28:38 np0005603654.novalocal sudo[8002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:38 np0005603654.novalocal python3[8004]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:28:38 np0005603654.novalocal sudo[8002]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:39 np0005603654.novalocal python3[8031]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-cda0-b33d-00000000216e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:28:39 np0005603654.novalocal python3[8061]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:28:41 np0005603654.novalocal sshd-session[7477]: Connection closed by 38.102.83.114 port 35594
Jan 31 07:28:41 np0005603654.novalocal sshd-session[7474]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:28:41 np0005603654.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 07:28:41 np0005603654.novalocal systemd[1]: session-4.scope: Consumed 3.481s CPU time.
Jan 31 07:28:41 np0005603654.novalocal systemd-logind[810]: Session 4 logged out. Waiting for processes to exit.
Jan 31 07:28:41 np0005603654.novalocal systemd-logind[810]: Removed session 4.
Jan 31 07:28:43 np0005603654.novalocal sshd-session[8065]: Accepted publickey for zuul from 38.102.83.114 port 34294 ssh2: RSA SHA256:7fpkPihK+1pYJj229Mqe0V6aalzFoVGtAbEqTCFuZew
Jan 31 07:28:43 np0005603654.novalocal systemd-logind[810]: New session 5 of user zuul.
Jan 31 07:28:43 np0005603654.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 31 07:28:43 np0005603654.novalocal sshd-session[8065]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:28:43 np0005603654.novalocal sudo[8092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daicdpforgcuiafbfynvemnsefntewvy ; /usr/bin/python3'
Jan 31 07:28:43 np0005603654.novalocal sudo[8092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:28:43 np0005603654.novalocal python3[8094]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:28:50 np0005603654.novalocal setsebool[8136]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 07:28:50 np0005603654.novalocal setsebool[8136]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 07:29:02 np0005603654.novalocal chronyd[826]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:29:02 np0005603654.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:29:12 np0005603654.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:29:30 np0005603654.novalocal dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 07:29:31 np0005603654.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:29:31 np0005603654.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:29:31 np0005603654.novalocal systemd[1]: Reloading.
Jan 31 07:29:31 np0005603654.novalocal systemd-rc-local-generator[8902]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:29:31 np0005603654.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:29:32 np0005603654.novalocal sudo[8092]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:38 np0005603654.novalocal python3[15132]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-a7da-6432-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:29:39 np0005603654.novalocal kernel: evm: overlay not supported
Jan 31 07:29:39 np0005603654.novalocal systemd[4302]: Starting D-Bus User Message Bus...
Jan 31 07:29:39 np0005603654.novalocal dbus-broker-launch[15880]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 07:29:39 np0005603654.novalocal dbus-broker-launch[15880]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 07:29:39 np0005603654.novalocal systemd[4302]: Started D-Bus User Message Bus.
Jan 31 07:29:39 np0005603654.novalocal dbus-broker-lau[15880]: Ready
Jan 31 07:29:39 np0005603654.novalocal systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 07:29:39 np0005603654.novalocal systemd[4302]: Created slice Slice /user.
Jan 31 07:29:39 np0005603654.novalocal systemd[4302]: podman-15776.scope: unit configures an IP firewall, but not running as root.
Jan 31 07:29:39 np0005603654.novalocal systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 07:29:39 np0005603654.novalocal systemd[4302]: Started podman-15776.scope.
Jan 31 07:29:40 np0005603654.novalocal systemd[4302]: Started podman-pause-3c9fc33a.scope.
Jan 31 07:29:40 np0005603654.novalocal sudo[16557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snikboekxeuljmlqwffpwwdtlzhcegmj ; /usr/bin/python3'
Jan 31 07:29:40 np0005603654.novalocal sudo[16557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:29:40 np0005603654.novalocal python3[16569]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.129.56.217:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.129.56.217:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:29:40 np0005603654.novalocal python3[16569]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 07:29:40 np0005603654.novalocal sudo[16557]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:41 np0005603654.novalocal sshd-session[8068]: Connection closed by 38.102.83.114 port 34294
Jan 31 07:29:41 np0005603654.novalocal sshd-session[8065]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:29:41 np0005603654.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 07:29:41 np0005603654.novalocal systemd[1]: session-5.scope: Consumed 41.517s CPU time.
Jan 31 07:29:41 np0005603654.novalocal systemd-logind[810]: Session 5 logged out. Waiting for processes to exit.
Jan 31 07:29:41 np0005603654.novalocal systemd-logind[810]: Removed session 5.
Jan 31 07:30:01 np0005603654.novalocal sshd-session[27398]: Connection closed by 38.102.83.129 port 57954 [preauth]
Jan 31 07:30:01 np0005603654.novalocal sshd-session[27406]: Connection closed by 38.102.83.129 port 57956 [preauth]
Jan 31 07:30:01 np0005603654.novalocal sshd-session[27403]: Unable to negotiate with 38.102.83.129 port 57968: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 07:30:01 np0005603654.novalocal sshd-session[27402]: Unable to negotiate with 38.102.83.129 port 57972: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 07:30:01 np0005603654.novalocal sshd-session[27400]: Unable to negotiate with 38.102.83.129 port 57974: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 07:30:04 np0005603654.novalocal sshd-session[29118]: Accepted publickey for zuul from 38.102.83.114 port 48766 ssh2: RSA SHA256:7fpkPihK+1pYJj229Mqe0V6aalzFoVGtAbEqTCFuZew
Jan 31 07:30:05 np0005603654.novalocal systemd-logind[810]: New session 6 of user zuul.
Jan 31 07:30:05 np0005603654.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 31 07:30:05 np0005603654.novalocal sshd-session[29118]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:30:05 np0005603654.novalocal python3[29217]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJv9VdejfO/LHU1/VApMA/Nr9YQBFG2x65P4YRrBoIyDrBbCsMsfdCsD+azzS5JYCp2R5DH1Cbs9NMEX5XCt+tA= zuul@np0005603653.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:30:05 np0005603654.novalocal sudo[29379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-serboyjocnrnqodyiwmjynumykcxwsip ; /usr/bin/python3'
Jan 31 07:30:05 np0005603654.novalocal sudo[29379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:30:05 np0005603654.novalocal python3[29385]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJv9VdejfO/LHU1/VApMA/Nr9YQBFG2x65P4YRrBoIyDrBbCsMsfdCsD+azzS5JYCp2R5DH1Cbs9NMEX5XCt+tA= zuul@np0005603653.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:30:05 np0005603654.novalocal sudo[29379]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:06 np0005603654.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:30:06 np0005603654.novalocal systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:30:06 np0005603654.novalocal systemd[1]: man-db-cache-update.service: Consumed 37.448s CPU time.
Jan 31 07:30:06 np0005603654.novalocal systemd[1]: run-r3145a6914cea408a97951e5cb4691873.service: Deactivated successfully.
Jan 31 07:30:06 np0005603654.novalocal sudo[29725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pufvcfgkxnjxhnhrfrrzjdahepulnmml ; /usr/bin/python3'
Jan 31 07:30:06 np0005603654.novalocal sudo[29725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:30:06 np0005603654.novalocal python3[29727]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603654.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 07:30:06 np0005603654.novalocal useradd[29729]: new group: name=cloud-admin, GID=1002
Jan 31 07:30:06 np0005603654.novalocal useradd[29729]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 31 07:30:06 np0005603654.novalocal sudo[29725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:06 np0005603654.novalocal sudo[29759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlfmgwpdzuihgwcnrxvfqnlwtyffgndt ; /usr/bin/python3'
Jan 31 07:30:06 np0005603654.novalocal sudo[29759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:30:06 np0005603654.novalocal python3[29761]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJv9VdejfO/LHU1/VApMA/Nr9YQBFG2x65P4YRrBoIyDrBbCsMsfdCsD+azzS5JYCp2R5DH1Cbs9NMEX5XCt+tA= zuul@np0005603653.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:30:06 np0005603654.novalocal sudo[29759]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:07 np0005603654.novalocal sudo[29837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmldyeeyavavnzfrlbnjzfiorylzjdtt ; /usr/bin/python3'
Jan 31 07:30:07 np0005603654.novalocal sudo[29837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:30:07 np0005603654.novalocal python3[29839]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:30:07 np0005603654.novalocal sudo[29837]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:07 np0005603654.novalocal sudo[29910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyvimdospfdetaycqcfvnyikkoarehti ; /usr/bin/python3'
Jan 31 07:30:07 np0005603654.novalocal sudo[29910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:30:07 np0005603654.novalocal python3[29912]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844606.9330323-135-263724419359631/source _original_basename=tmpvc3m4ocm follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:30:07 np0005603654.novalocal sudo[29910]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:08 np0005603654.novalocal sudo[29960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulazcbhmbojtusvrhsogappoybtzyxis ; /usr/bin/python3'
Jan 31 07:30:08 np0005603654.novalocal sudo[29960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:30:08 np0005603654.novalocal python3[29962]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 07:30:08 np0005603654.novalocal systemd[1]: Starting Hostname Service...
Jan 31 07:30:08 np0005603654.novalocal systemd[1]: Started Hostname Service.
Jan 31 07:30:08 np0005603654.novalocal systemd-hostnamed[29966]: Changed pretty hostname to 'compute-0'
Jan 31 07:30:08 compute-0 systemd-hostnamed[29966]: Hostname set to <compute-0> (static)
Jan 31 07:30:08 compute-0 NetworkManager[7190]: <info>  [1769844608.4515] hostname: static hostname changed from "np0005603654.novalocal" to "compute-0"
Jan 31 07:30:08 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:30:08 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:30:08 compute-0 sudo[29960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:08 compute-0 sshd-session[29163]: Connection closed by 38.102.83.114 port 48766
Jan 31 07:30:08 compute-0 sshd-session[29118]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:30:08 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 07:30:08 compute-0 systemd[1]: session-6.scope: Consumed 1.892s CPU time.
Jan 31 07:30:08 compute-0 systemd-logind[810]: Session 6 logged out. Waiting for processes to exit.
Jan 31 07:30:08 compute-0 systemd-logind[810]: Removed session 6.
Jan 31 07:30:18 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:30:38 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:32:27 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 07:32:27 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 07:32:27 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 07:32:27 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 07:33:35 compute-0 sshd-session[29990]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:33:35 compute-0 sshd-session[29990]: Connection reset by 176.120.22.52 port 9887
Jan 31 07:33:55 compute-0 sshd-session[29991]: Accepted publickey for zuul from 38.102.83.129 port 48818 ssh2: RSA SHA256:7fpkPihK+1pYJj229Mqe0V6aalzFoVGtAbEqTCFuZew
Jan 31 07:33:55 compute-0 systemd-logind[810]: New session 7 of user zuul.
Jan 31 07:33:55 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 31 07:33:55 compute-0 sshd-session[29991]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:33:56 compute-0 python3[30067]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:33:57 compute-0 sudo[30181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrwhujgrlhscfxyfyllypxqsbbxiftcc ; /usr/bin/python3'
Jan 31 07:33:57 compute-0 sudo[30181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:57 compute-0 python3[30183]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:33:57 compute-0 sudo[30181]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:57 compute-0 sudo[30254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afpfezmslvxcshclvxzomweafikrtgji ; /usr/bin/python3'
Jan 31 07:33:57 compute-0 sudo[30254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:57 compute-0 python3[30256]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769844837.3067558-33782-119634545389814/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:33:57 compute-0 sudo[30254]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:58 compute-0 sudo[30280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxynicrinoqmegicvuhlfzoltpytvml ; /usr/bin/python3'
Jan 31 07:33:58 compute-0 sudo[30280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:58 compute-0 python3[30282]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:33:58 compute-0 sudo[30280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:58 compute-0 sudo[30353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwwgamezmucucrcjdnwincqfxtkmypgv ; /usr/bin/python3'
Jan 31 07:33:58 compute-0 sudo[30353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:58 compute-0 python3[30355]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769844837.3067558-33782-119634545389814/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:33:58 compute-0 sudo[30353]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:58 compute-0 sudo[30379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugrkhtoiexhxbxmxxsdgfoepsfguogfh ; /usr/bin/python3'
Jan 31 07:33:58 compute-0 sudo[30379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:58 compute-0 python3[30381]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:33:58 compute-0 sudo[30379]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:58 compute-0 sudo[30452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifztlgejznrzwpirlrgkgofgiyqthsix ; /usr/bin/python3'
Jan 31 07:33:58 compute-0 sudo[30452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:59 compute-0 python3[30454]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769844837.3067558-33782-119634545389814/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:33:59 compute-0 sudo[30452]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:59 compute-0 sudo[30478]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfcjakdbddgpzhlmvftjurdnyeztvoin ; /usr/bin/python3'
Jan 31 07:33:59 compute-0 sudo[30478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:59 compute-0 python3[30480]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:33:59 compute-0 sudo[30478]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:59 compute-0 sudo[30551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zevsgqukpfosttgmdjyruhevhfhjpmpr ; /usr/bin/python3'
Jan 31 07:33:59 compute-0 sudo[30551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:59 compute-0 python3[30553]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769844837.3067558-33782-119634545389814/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:33:59 compute-0 sudo[30551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:59 compute-0 sudo[30577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vttgyraalgifmchmsivtnjgfpxjjcdnf ; /usr/bin/python3'
Jan 31 07:33:59 compute-0 sudo[30577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:59 compute-0 python3[30579]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:33:59 compute-0 sudo[30577]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:59 compute-0 sudo[30650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neyateunguhgdkepunjshklcrhelocxk ; /usr/bin/python3'
Jan 31 07:33:59 compute-0 sudo[30650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:00 compute-0 python3[30652]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769844837.3067558-33782-119634545389814/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:34:00 compute-0 sudo[30650]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:00 compute-0 sudo[30676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnnjjrceaqrdrycfjzmsqmerkqqabgtl ; /usr/bin/python3'
Jan 31 07:34:00 compute-0 sudo[30676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:00 compute-0 python3[30678]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:34:00 compute-0 sudo[30676]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:00 compute-0 sudo[30749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkzfaeoxioscfuarzjbyquzlvhhqipl ; /usr/bin/python3'
Jan 31 07:34:00 compute-0 sudo[30749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:00 compute-0 python3[30751]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769844837.3067558-33782-119634545389814/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:34:00 compute-0 sudo[30749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:00 compute-0 sudo[30775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqkhxcprvmlufauwjrbrrtkgofnoqqii ; /usr/bin/python3'
Jan 31 07:34:00 compute-0 sudo[30775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:00 compute-0 python3[30777]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:34:00 compute-0 sudo[30775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:01 compute-0 sudo[30848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aalbpvioasatltdnyxxmfgiiwwohokkj ; /usr/bin/python3'
Jan 31 07:34:01 compute-0 sudo[30848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:01 compute-0 python3[30850]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769844837.3067558-33782-119634545389814/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:34:01 compute-0 sudo[30848]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:03 compute-0 sshd-session[30875]: Connection closed by 192.168.122.11 port 35442 [preauth]
Jan 31 07:34:03 compute-0 sshd-session[30876]: Connection closed by 192.168.122.11 port 35444 [preauth]
Jan 31 07:34:03 compute-0 sshd-session[30877]: Unable to negotiate with 192.168.122.11 port 35456: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 07:34:03 compute-0 sshd-session[30878]: Unable to negotiate with 192.168.122.11 port 35460: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 07:34:03 compute-0 sshd-session[30879]: Unable to negotiate with 192.168.122.11 port 35462: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 07:34:15 compute-0 python3[30909]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:39:14 compute-0 sshd-session[29994]: Received disconnect from 38.102.83.129 port 48818:11: disconnected by user
Jan 31 07:39:14 compute-0 sshd-session[29994]: Disconnected from user zuul 38.102.83.129 port 48818
Jan 31 07:39:14 compute-0 sshd-session[29991]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:39:14 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 07:39:14 compute-0 systemd[1]: session-7.scope: Consumed 4.220s CPU time.
Jan 31 07:39:14 compute-0 systemd-logind[810]: Session 7 logged out. Waiting for processes to exit.
Jan 31 07:39:14 compute-0 systemd-logind[810]: Removed session 7.
Jan 31 07:39:48 compute-0 chronyd[826]: Selected source 72.38.129.202 (2.centos.pool.ntp.org)
Jan 31 07:44:42 compute-0 sshd-session[30915]: Connection closed by 176.65.134.22 port 34818
Jan 31 07:44:45 compute-0 sshd-session[30916]: Invalid user HONEYYYFAGGOT from 176.65.134.22 port 48518
Jan 31 07:44:46 compute-0 sshd-session[30916]: Connection closed by invalid user HONEYYYFAGGOT 176.65.134.22 port 48518 [preauth]
Jan 31 07:44:50 compute-0 sshd-session[30918]: Connection closed by authenticating user root 176.65.134.22 port 35070 [preauth]
Jan 31 07:44:53 compute-0 sshd-session[30920]: Invalid user admin from 176.65.134.22 port 50266
Jan 31 07:44:54 compute-0 sshd-session[30920]: Connection closed by invalid user admin 176.65.134.22 port 50266 [preauth]
Jan 31 07:44:59 compute-0 sshd-session[30922]: Connection closed by authenticating user root 176.65.134.22 port 54066 [preauth]
Jan 31 07:45:03 compute-0 sshd-session[30926]: Connection closed by 45.148.10.121 port 56268 [preauth]
Jan 31 07:45:04 compute-0 sshd-session[30924]: Invalid user admin from 176.65.134.22 port 39116
Jan 31 07:45:05 compute-0 sshd-session[30924]: Connection closed by invalid user admin 176.65.134.22 port 39116 [preauth]
Jan 31 07:45:10 compute-0 sshd-session[30928]: Connection closed by authenticating user root 176.65.134.22 port 58882 [preauth]
Jan 31 07:45:15 compute-0 sshd-session[30930]: Invalid user admin from 176.65.134.22 port 46024
Jan 31 07:45:16 compute-0 sshd-session[30930]: Connection closed by invalid user admin 176.65.134.22 port 46024 [preauth]
Jan 31 07:45:25 compute-0 sshd-session[30932]: Connection closed by authenticating user root 176.65.134.22 port 50280 [preauth]
Jan 31 07:45:31 compute-0 sshd-session[30935]: Invalid user admin from 176.65.134.22 port 53966
Jan 31 07:45:32 compute-0 sshd-session[30935]: Connection closed by invalid user admin 176.65.134.22 port 53966 [preauth]
Jan 31 07:45:41 compute-0 sshd-session[30937]: Connection closed by authenticating user root 176.65.134.22 port 45218 [preauth]
Jan 31 07:45:49 compute-0 sshd-session[30939]: Invalid user admin from 176.65.134.22 port 46720
Jan 31 07:45:51 compute-0 sshd-session[30939]: Connection closed by invalid user admin 176.65.134.22 port 46720 [preauth]
Jan 31 07:46:02 compute-0 sshd-session[30941]: Connection closed by authenticating user root 176.65.134.22 port 36140 [preauth]
Jan 31 07:46:09 compute-0 sshd-session[30943]: Invalid user admin from 176.65.134.22 port 40564
Jan 31 07:46:12 compute-0 sshd-session[30943]: Connection closed by invalid user admin 176.65.134.22 port 40564 [preauth]
Jan 31 07:46:21 compute-0 sshd-session[30945]: Invalid user support from 176.65.134.22 port 33570
Jan 31 07:46:23 compute-0 sshd-session[30945]: Connection closed by invalid user support 176.65.134.22 port 33570 [preauth]
Jan 31 07:46:32 compute-0 sshd-session[30947]: Invalid user service from 176.65.134.22 port 39456
Jan 31 07:46:34 compute-0 sshd-session[30947]: Connection closed by invalid user service 176.65.134.22 port 39456 [preauth]
Jan 31 07:46:43 compute-0 sshd-session[30949]: Invalid user guest from 176.65.134.22 port 35632
Jan 31 07:46:45 compute-0 sshd-session[30949]: Connection closed by invalid user guest 176.65.134.22 port 35632 [preauth]
Jan 31 07:46:55 compute-0 sshd-session[30952]: Invalid user user from 176.65.134.22 port 59300
Jan 31 07:46:59 compute-0 sshd-session[30952]: Connection closed by invalid user user 176.65.134.22 port 59300 [preauth]
Jan 31 07:47:08 compute-0 sshd-session[30954]: Invalid user test from 176.65.134.22 port 60894
Jan 31 07:47:11 compute-0 sshd-session[30954]: Connection closed by invalid user test 176.65.134.22 port 60894 [preauth]
Jan 31 07:47:21 compute-0 sshd-session[30957]: Invalid user pi from 176.65.134.22 port 49534
Jan 31 07:47:21 compute-0 sshd-session[30959]: Accepted publickey for zuul from 192.168.122.30 port 59876 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:47:21 compute-0 systemd-logind[810]: New session 8 of user zuul.
Jan 31 07:47:21 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 31 07:47:21 compute-0 sshd-session[30959]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:47:22 compute-0 python3.9[31112]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:47:23 compute-0 sshd-session[30957]: Connection closed by invalid user pi 176.65.134.22 port 49534 [preauth]
Jan 31 07:47:23 compute-0 sudo[31292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmbufhwqvbckjuptcactdresjhddiyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845643.4497957-27-107294473209436/AnsiballZ_command.py'
Jan 31 07:47:23 compute-0 sudo[31292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:47:24 compute-0 python3.9[31294]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:47:30 compute-0 sudo[31292]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:30 compute-0 sshd-session[30962]: Connection closed by 192.168.122.30 port 59876
Jan 31 07:47:30 compute-0 sshd-session[30959]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:47:30 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 07:47:30 compute-0 systemd[1]: session-8.scope: Consumed 7.185s CPU time.
Jan 31 07:47:30 compute-0 systemd-logind[810]: Session 8 logged out. Waiting for processes to exit.
Jan 31 07:47:30 compute-0 systemd-logind[810]: Removed session 8.
Jan 31 07:47:33 compute-0 sshd-session[31166]: Invalid user ubuntu from 176.65.134.22 port 50700
Jan 31 07:47:35 compute-0 sshd-session[31166]: Connection closed by invalid user ubuntu 176.65.134.22 port 50700 [preauth]
Jan 31 07:47:38 compute-0 sshd-session[31353]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:47:38 compute-0 sshd-session[31353]: Connection reset by 176.65.134.22 port 34300
Jan 31 07:47:41 compute-0 sshd-session[31354]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:47:41 compute-0 sshd-session[31354]: Connection reset by 176.65.134.22 port 52166
Jan 31 07:47:52 compute-0 sshd-session[31355]: Invalid user postgres from 176.65.134.22 port 47532
Jan 31 07:47:54 compute-0 sshd-session[31355]: Connection closed by invalid user postgres 176.65.134.22 port 47532 [preauth]
Jan 31 07:48:02 compute-0 sshd-session[31359]: Accepted publickey for zuul from 192.168.122.30 port 56560 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:48:02 compute-0 systemd-logind[810]: New session 9 of user zuul.
Jan 31 07:48:02 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 31 07:48:02 compute-0 sshd-session[31359]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:48:03 compute-0 python3.9[31512]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 07:48:04 compute-0 python3.9[31686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:48:05 compute-0 sudo[31836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szdfizbvrtrivjxnnvamyurevcttzmrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845684.8545291-40-109877806913306/AnsiballZ_command.py'
Jan 31 07:48:05 compute-0 sudo[31836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:05 compute-0 python3.9[31838]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:48:05 compute-0 sudo[31836]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:05 compute-0 sshd-session[31357]: Invalid user mysql from 176.65.134.22 port 34416
Jan 31 07:48:06 compute-0 sudo[31989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etlvaqnfqzydikaxtmykjhueskspekzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845685.758987-52-225385191698898/AnsiballZ_stat.py'
Jan 31 07:48:06 compute-0 sudo[31989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:06 compute-0 python3.9[31991]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:48:06 compute-0 sudo[31989]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:06 compute-0 sudo[32141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqljybjoknjresewybdlhkfivscjwxnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845686.529444-60-253409168455184/AnsiballZ_file.py'
Jan 31 07:48:06 compute-0 sudo[32141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:07 compute-0 python3.9[32143]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:48:07 compute-0 sudo[32141]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:07 compute-0 sudo[32293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suankutqdjsbgicakqjqgwzwtkvlihzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845687.2363997-68-109925408583511/AnsiballZ_stat.py'
Jan 31 07:48:07 compute-0 sudo[32293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:07 compute-0 python3.9[32295]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:48:07 compute-0 sudo[32293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:08 compute-0 sshd-session[31357]: Connection closed by invalid user mysql 176.65.134.22 port 34416 [preauth]
Jan 31 07:48:08 compute-0 sudo[32416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlrpaswdodbhsfmxjvvzvegwvrniwedm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845687.2363997-68-109925408583511/AnsiballZ_copy.py'
Jan 31 07:48:08 compute-0 sudo[32416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:08 compute-0 python3.9[32418]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845687.2363997-68-109925408583511/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:48:08 compute-0 sudo[32416]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:08 compute-0 sudo[32569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hypuaadthgsxbzafhwswdbcpyydxoubd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845688.6787856-83-22696765954133/AnsiballZ_setup.py'
Jan 31 07:48:08 compute-0 sudo[32569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:09 compute-0 python3.9[32571]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:48:09 compute-0 sudo[32569]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:09 compute-0 sudo[32725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsvzajexrxtfypxnobzmmjsegdfsbuux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845689.682199-91-206172224073865/AnsiballZ_file.py'
Jan 31 07:48:09 compute-0 sudo[32725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:10 compute-0 python3.9[32727]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:48:10 compute-0 sudo[32725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:10 compute-0 sudo[32877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbaehyhqammmbikbznlknbqzgymwjnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845690.2705286-100-257990487574280/AnsiballZ_file.py'
Jan 31 07:48:10 compute-0 sudo[32877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:10 compute-0 python3.9[32879]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:48:10 compute-0 sudo[32877]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:11 compute-0 sshd-session[32419]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:48:11 compute-0 sshd-session[32419]: Connection reset by 176.65.134.22 port 46668
Jan 31 07:48:11 compute-0 python3.9[33030]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:48:13 compute-0 python3.9[33283]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:48:14 compute-0 sshd-session[32987]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:48:14 compute-0 sshd-session[32987]: Connection reset by 176.65.134.22 port 51974
Jan 31 07:48:14 compute-0 python3.9[33433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:48:15 compute-0 python3.9[33588]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:48:16 compute-0 sudo[33745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxcnqhwiyjqeguxccuincrvacisfcfrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845695.8957305-148-4312885796586/AnsiballZ_setup.py'
Jan 31 07:48:16 compute-0 sudo[33745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:16 compute-0 python3.9[33747]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:48:16 compute-0 sudo[33745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:16 compute-0 sudo[33829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hijhpxcxkfckwhdoyillcjwtipwrkaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845695.8957305-148-4312885796586/AnsiballZ_dnf.py'
Jan 31 07:48:16 compute-0 sudo[33829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:48:17 compute-0 python3.9[33831]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:48:25 compute-0 sshd-session[33434]: Invalid user juniper from 176.65.134.22 port 50510
Jan 31 07:48:27 compute-0 sshd-session[33434]: Connection closed by invalid user juniper 176.65.134.22 port 50510 [preauth]
Jan 31 07:48:39 compute-0 sshd-session[33905]: Invalid user dlink from 176.65.134.22 port 32944
Jan 31 07:48:41 compute-0 sshd-session[33905]: Connection closed by invalid user dlink 176.65.134.22 port 32944 [preauth]
Jan 31 07:48:44 compute-0 sshd-session[33973]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:48:44 compute-0 sshd-session[33973]: Connection reset by 176.65.134.22 port 60998
Jan 31 07:48:47 compute-0 sshd-session[33979]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:48:47 compute-0 sshd-session[33979]: Connection reset by 176.65.134.22 port 44504
Jan 31 07:48:59 compute-0 sshd-session[33980]: Invalid user default from 176.65.134.22 port 37856
Jan 31 07:49:00 compute-0 systemd[1]: Reloading.
Jan 31 07:49:00 compute-0 systemd-rc-local-generator[34032]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:49:00 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 07:49:00 compute-0 systemd[1]: Reloading.
Jan 31 07:49:00 compute-0 systemd-rc-local-generator[34079]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:49:01 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 07:49:01 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 07:49:01 compute-0 systemd[1]: Reloading.
Jan 31 07:49:01 compute-0 systemd-rc-local-generator[34121]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:49:01 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 07:49:01 compute-0 dbus-broker-launch[786]: Noticed file-system modification, trigger reload.
Jan 31 07:49:01 compute-0 dbus-broker-launch[786]: Noticed file-system modification, trigger reload.
Jan 31 07:49:01 compute-0 sshd-session[33980]: Connection closed by invalid user default 176.65.134.22 port 37856 [preauth]
Jan 31 07:49:15 compute-0 sshd-session[34140]: Connection closed by authenticating user root 176.65.134.22 port 42304 [preauth]
Jan 31 07:49:18 compute-0 sshd-session[34191]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:49:18 compute-0 sshd-session[34191]: Connection reset by 176.65.134.22 port 44008
Jan 31 07:49:21 compute-0 sshd-session[34204]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:49:21 compute-0 sshd-session[34204]: Connection reset by 176.65.134.22 port 53080
Jan 31 07:49:35 compute-0 sshd-session[34219]: Connection closed by authenticating user root 176.65.134.22 port 57814 [preauth]
Jan 31 07:49:49 compute-0 sshd-session[34269]: Connection closed by authenticating user root 176.65.134.22 port 52776 [preauth]
Jan 31 07:49:52 compute-0 sshd-session[34296]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:49:52 compute-0 sshd-session[34296]: Connection reset by 176.65.134.22 port 36736
Jan 31 07:49:55 compute-0 sshd-session[34301]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:49:55 compute-0 sshd-session[34301]: Connection reset by 176.65.134.22 port 49366
Jan 31 07:50:04 compute-0 kernel: SELinux:  Converting 2726 SID table entries...
Jan 31 07:50:04 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:50:04 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:50:04 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:50:04 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:50:04 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:50:04 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:50:04 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:50:04 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 07:50:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:50:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:50:05 compute-0 systemd[1]: Reloading.
Jan 31 07:50:05 compute-0 systemd-rc-local-generator[34445]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:50:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:50:05 compute-0 sudo[33829]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:50:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:50:06 compute-0 systemd[1]: run-r6dbfe54c89de4bc397ae1de35e93bc9c.service: Deactivated successfully.
Jan 31 07:50:06 compute-0 sudo[35361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntnfqzxhkzljpfauqwzfzcgrlubfbzqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845805.8494098-160-133359852051560/AnsiballZ_command.py'
Jan 31 07:50:06 compute-0 sudo[35361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:06 compute-0 python3.9[35363]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:50:08 compute-0 sudo[35361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:09 compute-0 sudo[35642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksbifyvnwspsvskxgebrczmwvhqlebsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845808.5078213-168-80550922320277/AnsiballZ_selinux.py'
Jan 31 07:50:09 compute-0 sudo[35642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:09 compute-0 sshd-session[34331]: Connection closed by authenticating user root 176.65.134.22 port 40068 [preauth]
Jan 31 07:50:09 compute-0 python3.9[35644]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 07:50:09 compute-0 sudo[35642]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:10 compute-0 sudo[35795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdckcydxtxhiqeblvyoluczxsakjiwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845809.8007407-179-223765843142881/AnsiballZ_command.py'
Jan 31 07:50:10 compute-0 sudo[35795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:10 compute-0 python3.9[35797]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 07:50:10 compute-0 sudo[35795]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:11 compute-0 sudo[35948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlmrsdsfsrhutnrfdlsriuucxtdiwfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845811.039237-187-224493751499290/AnsiballZ_file.py'
Jan 31 07:50:11 compute-0 sudo[35948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:13 compute-0 python3.9[35950]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:50:13 compute-0 sudo[35948]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:14 compute-0 sudo[36101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pizynoquxbllivgcvvzkymcoszarindt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845814.0678945-195-169414145483342/AnsiballZ_mount.py'
Jan 31 07:50:14 compute-0 sudo[36101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:14 compute-0 python3.9[36103]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 07:50:14 compute-0 sudo[36101]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:15 compute-0 sudo[36253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kndywriifjnxedjnzdcyrebtgeoeofue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845815.7161312-223-83403361662108/AnsiballZ_file.py'
Jan 31 07:50:15 compute-0 sudo[36253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:18 compute-0 python3.9[36255]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:50:18 compute-0 sudo[36253]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:19 compute-0 sudo[36406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xefbhgzkzlgcibgenwdvqujnboibvpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845819.0275362-231-245612626711673/AnsiballZ_stat.py'
Jan 31 07:50:19 compute-0 sudo[36406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:19 compute-0 python3.9[36408]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:50:19 compute-0 sudo[36406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:20 compute-0 sudo[36529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssjrqooiemckuzsexpmwcifuremtqdxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845819.0275362-231-245612626711673/AnsiballZ_copy.py'
Jan 31 07:50:20 compute-0 sudo[36529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:20 compute-0 python3.9[36531]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769845819.0275362-231-245612626711673/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ff538dbcfa65d2a0e72b63d2920a0809a609b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:50:20 compute-0 sudo[36529]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:21 compute-0 sudo[36681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luiucpedeiyqvraqqvzrzxcqptnchiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845821.0153022-255-163021042802655/AnsiballZ_stat.py'
Jan 31 07:50:21 compute-0 sudo[36681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:21 compute-0 python3.9[36683]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:50:21 compute-0 sudo[36681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:21 compute-0 sudo[36833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwgwuncefmefnpcrhcrinpbixtgfjpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845821.601717-263-255688335673462/AnsiballZ_command.py'
Jan 31 07:50:21 compute-0 sudo[36833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:21 compute-0 python3.9[36835]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:50:22 compute-0 sudo[36833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:22 compute-0 sudo[36986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dklldmqeiyrzyybwjhateuukxoppdiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845822.181714-271-157157540149415/AnsiballZ_file.py'
Jan 31 07:50:22 compute-0 sudo[36986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:22 compute-0 python3.9[36988]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:50:22 compute-0 sudo[36986]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:23 compute-0 sudo[37138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvwpmygduyendoyrjdurgevlwbdmnnok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845822.950081-282-3265786605919/AnsiballZ_getent.py'
Jan 31 07:50:23 compute-0 sudo[37138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:23 compute-0 python3.9[37140]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 07:50:23 compute-0 sudo[37138]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:23 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:50:23 compute-0 sshd-session[35645]: Connection closed by authenticating user root 176.65.134.22 port 35646 [preauth]
Jan 31 07:50:24 compute-0 sudo[37293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwdcpepzxihuxlwmurbhvltzgpcvhmyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845823.637857-290-92792692837277/AnsiballZ_group.py'
Jan 31 07:50:24 compute-0 sudo[37293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:24 compute-0 python3.9[37295]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:50:24 compute-0 groupadd[37296]: group added to /etc/group: name=qemu, GID=107
Jan 31 07:50:24 compute-0 groupadd[37296]: group added to /etc/gshadow: name=qemu
Jan 31 07:50:24 compute-0 groupadd[37296]: new group: name=qemu, GID=107
Jan 31 07:50:24 compute-0 sudo[37293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:24 compute-0 sudo[37451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzpfxrhgxkojlqpeywfdoewirptnsaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845824.4450526-298-40942739649450/AnsiballZ_user.py'
Jan 31 07:50:24 compute-0 sudo[37451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:25 compute-0 python3.9[37453]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:50:25 compute-0 useradd[37455]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 07:50:25 compute-0 sudo[37451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:25 compute-0 sudo[37611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzoulpgmnswxokifoslkmzhoniludcrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845825.2444048-306-137543937849800/AnsiballZ_getent.py'
Jan 31 07:50:25 compute-0 sudo[37611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:25 compute-0 python3.9[37613]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 07:50:25 compute-0 sudo[37611]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:26 compute-0 sudo[37764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkkkybkdykdrwgrfagfuyayrfhmqbbmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845825.818914-314-243167774419769/AnsiballZ_group.py'
Jan 31 07:50:26 compute-0 sudo[37764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:26 compute-0 python3.9[37766]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:50:26 compute-0 groupadd[37767]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 31 07:50:26 compute-0 groupadd[37767]: group added to /etc/gshadow: name=hugetlbfs
Jan 31 07:50:26 compute-0 groupadd[37767]: new group: name=hugetlbfs, GID=42477
Jan 31 07:50:26 compute-0 sudo[37764]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:26 compute-0 sshd-session[37190]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:50:26 compute-0 sshd-session[37190]: Connection reset by 176.65.134.22 port 41072
Jan 31 07:50:26 compute-0 sudo[37923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuowflokwtqmcrdxzfeijuhhinexguft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845826.5348797-323-219467228734243/AnsiballZ_file.py'
Jan 31 07:50:26 compute-0 sudo[37923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:26 compute-0 python3.9[37925]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 07:50:27 compute-0 sudo[37923]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:27 compute-0 sudo[38075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etnczxzkyuxgnayhsfomoxcigkjnmraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845827.2717824-334-207798222884226/AnsiballZ_dnf.py'
Jan 31 07:50:27 compute-0 sudo[38075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:27 compute-0 python3.9[38077]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:50:29 compute-0 sudo[38075]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:29 compute-0 sshd-session[37872]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:50:29 compute-0 sshd-session[37872]: Connection reset by 176.65.134.22 port 34132
Jan 31 07:50:29 compute-0 sudo[38229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deljkipuovixncjdqsjtvudbzfpfpccj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845829.467939-342-136294028548039/AnsiballZ_file.py'
Jan 31 07:50:29 compute-0 sudo[38229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:29 compute-0 python3.9[38231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:50:29 compute-0 sudo[38229]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:30 compute-0 sudo[38381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pclnenvculxcyyofhaweouwgkrjfzpbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845830.0129163-350-192456750837866/AnsiballZ_stat.py'
Jan 31 07:50:30 compute-0 sudo[38381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:30 compute-0 python3.9[38383]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:50:30 compute-0 sudo[38381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:30 compute-0 sudo[38504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtmsftswsbhreqryuhiyqkzweocjgmuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845830.0129163-350-192456750837866/AnsiballZ_copy.py'
Jan 31 07:50:30 compute-0 sudo[38504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:30 compute-0 python3.9[38506]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845830.0129163-350-192456750837866/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:50:31 compute-0 sudo[38504]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:31 compute-0 sudo[38656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpomopuyuczprzwybppivekqjtzxhnls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845831.1365657-365-72572891420915/AnsiballZ_systemd.py'
Jan 31 07:50:31 compute-0 sudo[38656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:31 compute-0 python3.9[38658]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:50:31 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 07:50:32 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 07:50:32 compute-0 kernel: Bridge firewalling registered
Jan 31 07:50:32 compute-0 systemd-modules-load[38662]: Inserted module 'br_netfilter'
Jan 31 07:50:32 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 07:50:32 compute-0 sudo[38656]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:32 compute-0 sudo[38815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpsjphililuyhhvqliccwplvrukocnuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845832.2818317-373-216874936004878/AnsiballZ_stat.py'
Jan 31 07:50:32 compute-0 sudo[38815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:32 compute-0 sshd-session[38211]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:50:32 compute-0 sshd-session[38211]: Connection reset by 176.65.134.22 port 55292
Jan 31 07:50:32 compute-0 python3.9[38817]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:50:32 compute-0 sudo[38815]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:32 compute-0 sudo[38939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtchhelpxiuvfjurmonjftxkzihtaitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845832.2818317-373-216874936004878/AnsiballZ_copy.py'
Jan 31 07:50:32 compute-0 sudo[38939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:33 compute-0 python3.9[38941]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845832.2818317-373-216874936004878/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:50:33 compute-0 sudo[38939]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:33 compute-0 sudo[39091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gotnofjmkegnwedxdodbcowwxtgdqrgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845833.4411027-391-97031616143267/AnsiballZ_dnf.py'
Jan 31 07:50:33 compute-0 sudo[39091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:33 compute-0 python3.9[39093]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:50:36 compute-0 dbus-broker-launch[786]: Noticed file-system modification, trigger reload.
Jan 31 07:50:36 compute-0 dbus-broker-launch[786]: Noticed file-system modification, trigger reload.
Jan 31 07:50:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:50:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:50:37 compute-0 systemd[1]: Reloading.
Jan 31 07:50:37 compute-0 systemd-rc-local-generator[39156]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:50:37 compute-0 systemd[1]: Starting dnf makecache...
Jan 31 07:50:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:50:37 compute-0 dnf[39166]: Failed determining last makecache time.
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-barbican-42b4c41831408a8e323 102 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-python-glean-642fffe0203a8ffcc2443db52 141 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-cinder-1c00d6490d88e436f26ef 154 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-python-stevedore-c4acc5639fd2329372142 143 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-python-cloudkitty-tests-tempest-783703 126 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-diskimage-builder-61b717cc45660834fe9a 146 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-nova-eaa65f0b85123a4ee343246 153 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-python-designate-tests-tempest-347fdbc 187 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-glance-1fd12c29b339f30fe823e 172 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 183 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-manila-d783d10e75495b73866db 168 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-neutron-95cadbd379667c8520c8 166 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-octavia-5975097dd4b021385178 156 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-watcher-c014f81a8647287f6dcc 175 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-python-tcib-78032d201b02cee27e8e644c61 151 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 164 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-swift-dc98a8463506ac520c469a 156 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-python-tempestconf-8515371b7cceebd4282 153 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 dnf[39166]: delorean-openstack-heat-ui-013accbfd179753bc3f0 130 kB/s | 3.0 kB     00:00
Jan 31 07:50:37 compute-0 sudo[39091]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:38 compute-0 dnf[39166]: CentOS Stream 9 - BaseOS                         58 kB/s | 6.1 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: CentOS Stream 9 - AppStream                      67 kB/s | 6.5 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: CentOS Stream 9 - CRB                            27 kB/s | 6.0 kB     00:00
Jan 31 07:50:38 compute-0 python3.9[41168]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:50:38 compute-0 dnf[39166]: CentOS Stream 9 - Extras packages                75 kB/s | 7.3 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: dlrn-antelope-testing                           146 kB/s | 3.0 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: dlrn-antelope-build-deps                        130 kB/s | 3.0 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: centos9-rabbitmq                                 83 kB/s | 3.0 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: centos9-storage                                 114 kB/s | 3.0 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: centos9-opstools                                120 kB/s | 3.0 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: NFV SIG OpenvSwitch                              37 kB/s | 3.0 kB     00:00
Jan 31 07:50:38 compute-0 dnf[39166]: repo-setup-centos-appstream                     181 kB/s | 4.4 kB     00:00
Jan 31 07:50:39 compute-0 dnf[39166]: repo-setup-centos-baseos                         91 kB/s | 3.9 kB     00:00
Jan 31 07:50:39 compute-0 dnf[39166]: repo-setup-centos-highavailability              163 kB/s | 3.9 kB     00:00
Jan 31 07:50:39 compute-0 dnf[39166]: repo-setup-centos-powertools                     56 kB/s | 4.3 kB     00:00
Jan 31 07:50:39 compute-0 python3.9[42460]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 07:50:39 compute-0 dnf[39166]: Extra Packages for Enterprise Linux 9 - x86_64  105 kB/s |  31 kB     00:00
Jan 31 07:50:39 compute-0 python3.9[43187]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:50:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:50:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:50:39 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.257s CPU time.
Jan 31 07:50:39 compute-0 systemd[1]: run-r237147683003480b9faf1a6245611cd3.service: Deactivated successfully.
Jan 31 07:50:40 compute-0 dnf[39166]: Metadata cache created.
Jan 31 07:50:40 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 07:50:40 compute-0 systemd[1]: Finished dnf makecache.
Jan 31 07:50:40 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.736s CPU time.
Jan 31 07:50:40 compute-0 sudo[43339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bamqaqmfosyxkfzjnljnnvbewcsqluqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845840.0241354-430-161886656534603/AnsiballZ_command.py'
Jan 31 07:50:40 compute-0 sudo[43339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:40 compute-0 python3.9[43341]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:50:40 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 07:50:40 compute-0 systemd[1]: Starting Authorization Manager...
Jan 31 07:50:40 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 07:50:40 compute-0 polkitd[43559]: Started polkitd version 0.117
Jan 31 07:50:41 compute-0 polkitd[43559]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 07:50:41 compute-0 polkitd[43559]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 07:50:41 compute-0 polkitd[43559]: Finished loading, compiling and executing 2 rules
Jan 31 07:50:41 compute-0 systemd[1]: Started Authorization Manager.
Jan 31 07:50:41 compute-0 polkitd[43559]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 31 07:50:41 compute-0 sudo[43339]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:41 compute-0 sudo[43727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xakcmcnvdpefbkrgnodqwqsamujpntmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845841.298371-439-201152024674998/AnsiballZ_systemd.py'
Jan 31 07:50:41 compute-0 sudo[43727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:41 compute-0 python3.9[43729]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:50:41 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 07:50:41 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 07:50:41 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 07:50:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 07:50:42 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 07:50:42 compute-0 sudo[43727]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:42 compute-0 python3.9[43891]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 07:50:44 compute-0 sudo[44041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozqchlydpyjqjgdmjkzrpvhsmwuxrwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845844.0811422-496-55362045828798/AnsiballZ_systemd.py'
Jan 31 07:50:44 compute-0 sudo[44041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:44 compute-0 python3.9[44043]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:50:45 compute-0 systemd[1]: Reloading.
Jan 31 07:50:45 compute-0 systemd-rc-local-generator[44073]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:50:45 compute-0 sudo[44041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:45 compute-0 sshd-session[38818]: Connection closed by authenticating user root 176.65.134.22 port 47660 [preauth]
Jan 31 07:50:46 compute-0 sudo[44231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xydxuwqftuxncoqnywsesamkkacalprr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845845.9560301-496-201833881982747/AnsiballZ_systemd.py'
Jan 31 07:50:46 compute-0 sudo[44231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:46 compute-0 python3.9[44233]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:50:46 compute-0 systemd[1]: Reloading.
Jan 31 07:50:46 compute-0 systemd-rc-local-generator[44254]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:50:46 compute-0 sudo[44231]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:47 compute-0 sudo[44419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bifhelzwgsrpahqehvylgfdwzzzfkyte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845846.8149116-512-219481053429601/AnsiballZ_command.py'
Jan 31 07:50:47 compute-0 sudo[44419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:47 compute-0 python3.9[44421]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:50:47 compute-0 sudo[44419]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:47 compute-0 sudo[44572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmerlzlazvglhnumyfjajapueqkhinvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845847.3739107-520-150478714352757/AnsiballZ_command.py'
Jan 31 07:50:47 compute-0 sudo[44572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:47 compute-0 python3.9[44574]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:50:47 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 07:50:47 compute-0 sudo[44572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:48 compute-0 sudo[44725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfbsgpprbwsipunytsunocidsqqodyjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845847.9345994-528-61933368302982/AnsiballZ_command.py'
Jan 31 07:50:48 compute-0 sudo[44725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:48 compute-0 python3.9[44727]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:50:49 compute-0 sudo[44725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:50 compute-0 sudo[44888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahgkiuasdvpiwmjeyncftihuclteyery ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845849.8455553-536-144583306606259/AnsiballZ_command.py'
Jan 31 07:50:50 compute-0 sudo[44888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:50 compute-0 python3.9[44890]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:50:50 compute-0 sudo[44888]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:50 compute-0 sudo[45041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwuhlbpjaumareiuumacbsswgvcaurhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845850.368625-544-166235970008536/AnsiballZ_systemd.py'
Jan 31 07:50:50 compute-0 sudo[45041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:50 compute-0 python3.9[45043]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:50:50 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 07:50:50 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 07:50:50 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 07:50:50 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 31 07:50:50 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 07:50:50 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 31 07:50:50 compute-0 sudo[45041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:51 compute-0 sshd-session[31362]: Connection closed by 192.168.122.30 port 56560
Jan 31 07:50:51 compute-0 sshd-session[31359]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:50:51 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 07:50:51 compute-0 systemd[1]: session-9.scope: Consumed 2min 2.047s CPU time.
Jan 31 07:50:51 compute-0 systemd-logind[810]: Session 9 logged out. Waiting for processes to exit.
Jan 31 07:50:51 compute-0 systemd-logind[810]: Removed session 9.
Jan 31 07:50:57 compute-0 sshd-session[45073]: Accepted publickey for zuul from 192.168.122.30 port 55978 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:50:57 compute-0 systemd-logind[810]: New session 10 of user zuul.
Jan 31 07:50:57 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 31 07:50:57 compute-0 sshd-session[45073]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:50:58 compute-0 python3.9[45226]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:50:59 compute-0 sudo[45380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufbcclioimhhiqdwpbxmfaiucyoxigim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845858.6369665-31-251426864690162/AnsiballZ_getent.py'
Jan 31 07:50:59 compute-0 sudo[45380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:59 compute-0 python3.9[45382]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 07:50:59 compute-0 sudo[45380]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:59 compute-0 sudo[45533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtjvagduolwqzbstulvsxboqcsakaoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845859.3438725-39-102510951492683/AnsiballZ_group.py'
Jan 31 07:50:59 compute-0 sudo[45533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:50:59 compute-0 python3.9[45535]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:50:59 compute-0 groupadd[45536]: group added to /etc/group: name=openvswitch, GID=42476
Jan 31 07:50:59 compute-0 groupadd[45536]: group added to /etc/gshadow: name=openvswitch
Jan 31 07:50:59 compute-0 groupadd[45536]: new group: name=openvswitch, GID=42476
Jan 31 07:50:59 compute-0 sudo[45533]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:00 compute-0 sudo[45691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwrhlxjobzsnmhyftvyyfsgkskcjmhtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845860.1127431-47-74024323046299/AnsiballZ_user.py'
Jan 31 07:51:00 compute-0 sudo[45691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:00 compute-0 python3.9[45693]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:51:00 compute-0 useradd[45695]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 07:51:00 compute-0 useradd[45695]: add 'openvswitch' to group 'hugetlbfs'
Jan 31 07:51:00 compute-0 useradd[45695]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 31 07:51:00 compute-0 sudo[45691]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:01 compute-0 sudo[45851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfpkicuadabxayceiojgawxywupjakbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845861.1449957-57-247109413562479/AnsiballZ_setup.py'
Jan 31 07:51:01 compute-0 sudo[45851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:01 compute-0 sshd-session[44150]: Connection closed by authenticating user root 176.65.134.22 port 50618 [preauth]
Jan 31 07:51:01 compute-0 python3.9[45853]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:51:01 compute-0 sudo[45851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:02 compute-0 sudo[45936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqhotyemmebnvavbqiicgodckejhurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845861.1449957-57-247109413562479/AnsiballZ_dnf.py'
Jan 31 07:51:02 compute-0 sudo[45936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:02 compute-0 python3.9[45938]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:51:04 compute-0 sshd-session[45862]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:51:04 compute-0 sshd-session[45862]: Connection reset by 176.65.134.22 port 50786
Jan 31 07:51:04 compute-0 sudo[45936]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:05 compute-0 sudo[46101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-palnfczgyxcrjbzmemffvnwqzyeyevmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845864.9283526-71-243751852724848/AnsiballZ_dnf.py'
Jan 31 07:51:05 compute-0 sudo[46101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:05 compute-0 python3.9[46103]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:51:07 compute-0 sshd-session[45975]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:51:07 compute-0 sshd-session[45975]: Connection reset by 176.65.134.22 port 53766
Jan 31 07:51:16 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 07:51:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:51:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:51:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:51:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:51:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:51:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:51:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:51:16 compute-0 groupadd[46128]: group added to /etc/group: name=unbound, GID=994
Jan 31 07:51:16 compute-0 groupadd[46128]: group added to /etc/gshadow: name=unbound
Jan 31 07:51:16 compute-0 groupadd[46128]: new group: name=unbound, GID=994
Jan 31 07:51:16 compute-0 useradd[46135]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 31 07:51:16 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 07:51:16 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 07:51:17 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:51:17 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:51:18 compute-0 systemd[1]: Reloading.
Jan 31 07:51:18 compute-0 systemd-sysv-generator[46637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:51:18 compute-0 systemd-rc-local-generator[46633]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:51:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:51:18 compute-0 sudo[46101]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:51:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:51:18 compute-0 systemd[1]: run-r081f0eb620a94f3ea097a4786dedb2b4.service: Deactivated successfully.
Jan 31 07:51:19 compute-0 sudo[47202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pltbytzlaekakjihgsrpdkfsoimondrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845878.8530996-79-44011733747147/AnsiballZ_systemd.py'
Jan 31 07:51:19 compute-0 sudo[47202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:19 compute-0 python3.9[47204]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:51:19 compute-0 systemd[1]: Reloading.
Jan 31 07:51:19 compute-0 systemd-sysv-generator[47235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:51:19 compute-0 systemd-rc-local-generator[47230]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:51:19 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 07:51:19 compute-0 chown[47246]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 07:51:19 compute-0 ovs-ctl[47251]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 07:51:20 compute-0 ovs-ctl[47251]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 07:51:20 compute-0 ovs-ctl[47251]: Starting ovsdb-server [  OK  ]
Jan 31 07:51:20 compute-0 ovs-vsctl[47300]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 07:51:20 compute-0 ovs-vsctl[47320]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"41f56c18-6e96-48c3-b4a0-6aca47eb55b4\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 07:51:20 compute-0 ovs-ctl[47251]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 07:51:20 compute-0 sshd-session[46118]: Invalid user admin from 176.65.134.22 port 36848
Jan 31 07:51:20 compute-0 ovs-vsctl[47326]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 07:51:20 compute-0 ovs-ctl[47251]: Enabling remote OVSDB managers [  OK  ]
Jan 31 07:51:20 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 07:51:20 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 07:51:20 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 07:51:20 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 07:51:20 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 07:51:20 compute-0 ovs-ctl[47370]: Inserting openvswitch module [  OK  ]
Jan 31 07:51:20 compute-0 ovs-ctl[47339]: Starting ovs-vswitchd [  OK  ]
Jan 31 07:51:20 compute-0 ovs-vsctl[47389]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 07:51:20 compute-0 ovs-ctl[47339]: Enabling remote OVSDB managers [  OK  ]
Jan 31 07:51:20 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 07:51:20 compute-0 systemd[1]: Starting Open vSwitch...
Jan 31 07:51:20 compute-0 systemd[1]: Finished Open vSwitch.
Jan 31 07:51:20 compute-0 sudo[47202]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:21 compute-0 python3.9[47540]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:51:21 compute-0 sudo[47690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koacdltgdneqgdgopzcjfxrdbajmjfqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845881.4529254-97-130603951592746/AnsiballZ_sefcontext.py'
Jan 31 07:51:21 compute-0 sudo[47690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:22 compute-0 sshd-session[46118]: Connection closed by invalid user admin 176.65.134.22 port 36848 [preauth]
Jan 31 07:51:22 compute-0 python3.9[47692]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 07:51:23 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 07:51:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:51:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:51:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:51:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:51:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:51:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:51:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:51:23 compute-0 sudo[47690]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:24 compute-0 python3.9[47848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:51:24 compute-0 sudo[48004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkgfbahppygmwxkwunrpuwgvwusteepu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845884.453266-115-71610757983725/AnsiballZ_dnf.py'
Jan 31 07:51:24 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 07:51:24 compute-0 sudo[48004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:24 compute-0 python3.9[48006]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:51:26 compute-0 sudo[48004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:26 compute-0 sudo[48158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekmnkrmmphxfmbbdmynmdsqvlspupxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845886.166242-123-200925289736118/AnsiballZ_command.py'
Jan 31 07:51:26 compute-0 sudo[48158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:26 compute-0 python3.9[48160]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:51:27 compute-0 sudo[48158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:27 compute-0 sudo[48445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsdvicbbfvcjpjguityoasqpyqmymgch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845887.482929-131-180684707982004/AnsiballZ_file.py'
Jan 31 07:51:27 compute-0 sudo[48445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:28 compute-0 python3.9[48447]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 07:51:28 compute-0 sudo[48445]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:28 compute-0 python3.9[48597]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:51:29 compute-0 sudo[48749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egxqhbllemxczugeecbwbmcvxvreapng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845888.929841-147-7324941948291/AnsiballZ_dnf.py'
Jan 31 07:51:29 compute-0 sudo[48749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:29 compute-0 python3.9[48751]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:51:30 compute-0 sshd-session[48754]: Connection closed by 193.32.162.145 port 41690
Jan 31 07:51:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:51:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:51:31 compute-0 systemd[1]: Reloading.
Jan 31 07:51:31 compute-0 systemd-sysv-generator[48792]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:51:31 compute-0 systemd-rc-local-generator[48789]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:51:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:51:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:51:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:51:31 compute-0 systemd[1]: run-rf10adf7eecf54eaf81841ab8f6608178.service: Deactivated successfully.
Jan 31 07:51:31 compute-0 sudo[48749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:32 compute-0 sudo[49067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxjftgtkcyhsuvciqhngacykyxpwgvft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845891.949513-155-137754525701919/AnsiballZ_systemd.py'
Jan 31 07:51:32 compute-0 sudo[49067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:32 compute-0 python3.9[49069]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:51:32 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 07:51:32 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 07:51:32 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 07:51:32 compute-0 systemd[1]: Stopping Network Manager...
Jan 31 07:51:32 compute-0 NetworkManager[7190]: <info>  [1769845892.5093] caught SIGTERM, shutting down normally.
Jan 31 07:51:32 compute-0 NetworkManager[7190]: <info>  [1769845892.5106] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:51:32 compute-0 NetworkManager[7190]: <info>  [1769845892.5107] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:51:32 compute-0 NetworkManager[7190]: <info>  [1769845892.5107] dhcp4 (eth0): state changed no lease
Jan 31 07:51:32 compute-0 NetworkManager[7190]: <info>  [1769845892.5111] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:51:32 compute-0 NetworkManager[7190]: <info>  [1769845892.5182] exiting (success)
Jan 31 07:51:32 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:51:32 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 07:51:32 compute-0 systemd[1]: Stopped Network Manager.
Jan 31 07:51:32 compute-0 systemd[1]: NetworkManager.service: Consumed 17.043s CPU time, 4.1M memory peak, read 0B from disk, written 26.0K to disk.
Jan 31 07:51:32 compute-0 systemd[1]: Starting Network Manager...
Jan 31 07:51:32 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.5619] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:cfd2689d-5023-49cf-871a-74cb51f0f7c6)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.5620] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.5656] manager[0x55ce4377c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 07:51:32 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 07:51:32 compute-0 systemd[1]: Started Hostname Service.
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6502] hostname: hostname: using hostnamed
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6504] hostname: static hostname changed from (none) to "compute-0"
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6507] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6511] manager[0x55ce4377c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6511] manager[0x55ce4377c000]: rfkill: WWAN hardware radio set enabled
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6530] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6539] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6539] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6539] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6540] manager: Networking is enabled by state file
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6541] settings: Loaded settings plugin: keyfile (internal)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6544] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6567] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6573] dhcp: init: Using DHCP client 'internal'
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6575] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6578] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6582] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6588] device (lo): Activation: starting connection 'lo' (19c7276e-9b34-4ddb-9414-c336dedfbb59)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6592] device (eth0): carrier: link connected
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6594] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6598] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6598] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6603] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6608] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6612] device (eth1): carrier: link connected
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6615] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6619] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (39443809-bacc-53e1-8f0a-bd4718cbb099) (indicated)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6619] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6623] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6629] device (eth1): Activation: starting connection 'ci-private-network' (39443809-bacc-53e1-8f0a-bd4718cbb099)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6633] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 07:51:32 compute-0 systemd[1]: Started Network Manager.
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6646] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6649] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6651] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6653] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6656] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6658] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6661] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6664] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6670] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6672] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6681] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6693] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6707] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6711] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6719] device (lo): Activation: successful, device activated.
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6733] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Jan 31 07:51:32 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6746] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6841] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6847] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6852] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6858] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6861] device (eth1): Activation: successful, device activated.
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6881] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6884] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6889] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6892] device (eth0): Activation: successful, device activated.
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6897] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 07:51:32 compute-0 NetworkManager[49077]: <info>  [1769845892.6901] manager: startup complete
Jan 31 07:51:32 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 31 07:51:32 compute-0 sudo[49067]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:33 compute-0 sudo[49293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynaidkfvazilkckdchyopcxosrtbjhlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845892.8447042-163-75050322707561/AnsiballZ_dnf.py'
Jan 31 07:51:33 compute-0 sudo[49293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:33 compute-0 python3.9[49295]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:51:34 compute-0 sshd-session[47693]: Invalid user admin from 176.65.134.22 port 59432
Jan 31 07:51:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:51:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:51:37 compute-0 systemd[1]: Reloading.
Jan 31 07:51:37 compute-0 systemd-rc-local-generator[49341]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:51:37 compute-0 systemd-sysv-generator[49353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:51:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:51:37 compute-0 sshd-session[47693]: Connection closed by invalid user admin 176.65.134.22 port 59432 [preauth]
Jan 31 07:51:38 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:51:38 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:51:38 compute-0 systemd[1]: run-r5ef7e98319934de8bbd0915f8bd4fd99.service: Deactivated successfully.
Jan 31 07:51:38 compute-0 sudo[49293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:38 compute-0 sudo[49755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydrebmjaqokocbxvhfsfoyyxvrvidlbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845898.5482295-175-84846708443515/AnsiballZ_stat.py'
Jan 31 07:51:38 compute-0 sudo[49755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:38 compute-0 python3.9[49757]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:51:38 compute-0 sudo[49755]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:39 compute-0 sudo[49907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppoocrnotlxzzbujjaeyedgcsqctjudz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845899.125661-184-50182642012195/AnsiballZ_ini_file.py'
Jan 31 07:51:39 compute-0 sudo[49907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:39 compute-0 python3.9[49909]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:39 compute-0 sudo[49907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:40 compute-0 sudo[50061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwmkmxbolcwvwwbpaaebifmgydraphru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845899.8786507-194-18116375590665/AnsiballZ_ini_file.py'
Jan 31 07:51:40 compute-0 sudo[50061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:40 compute-0 python3.9[50063]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:40 compute-0 sudo[50061]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:40 compute-0 sudo[50213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcmhvtcgipeihsorvvhciknlzfxveuco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845900.3954065-194-214742379439645/AnsiballZ_ini_file.py'
Jan 31 07:51:40 compute-0 sudo[50213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:40 compute-0 sshd-session[49535]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:51:40 compute-0 sshd-session[49535]: Connection reset by 176.65.134.22 port 47826
Jan 31 07:51:40 compute-0 python3.9[50215]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:40 compute-0 sudo[50213]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:41 compute-0 sudo[50366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smdykonoffgxkshoguwvqulepkhaeihp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845900.9230497-209-261504338732368/AnsiballZ_ini_file.py'
Jan 31 07:51:41 compute-0 sudo[50366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:41 compute-0 python3.9[50368]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:41 compute-0 sudo[50366]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:41 compute-0 sudo[50518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcigucpflzpekzdwltjpvucpbirafvdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845901.5271406-209-57283003970711/AnsiballZ_ini_file.py'
Jan 31 07:51:41 compute-0 sudo[50518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:41 compute-0 python3.9[50520]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:41 compute-0 sudo[50518]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:42 compute-0 sudo[50670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxffmkrzsvxqvyoxgrjwkavfdupppzki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845902.056895-224-66360580313390/AnsiballZ_stat.py'
Jan 31 07:51:42 compute-0 sudo[50670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:42 compute-0 python3.9[50672]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:51:42 compute-0 sudo[50670]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:42 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:51:42 compute-0 sudo[50793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aahauxpiexmstyggwwslfdosuyisyxih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845902.056895-224-66360580313390/AnsiballZ_copy.py'
Jan 31 07:51:42 compute-0 sudo[50793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:43 compute-0 python3.9[50795]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845902.056895-224-66360580313390/.source _original_basename=.bqk45wyv follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:43 compute-0 sudo[50793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:43 compute-0 sudo[50945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpducrbeqoywlntqpufiohbawrzaqqfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845903.3162649-239-159017114195305/AnsiballZ_file.py'
Jan 31 07:51:43 compute-0 sudo[50945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:43 compute-0 sshd-session[50216]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:51:43 compute-0 sshd-session[50216]: Connection reset by 176.65.134.22 port 41630
Jan 31 07:51:43 compute-0 python3.9[50947]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:43 compute-0 sudo[50945]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:44 compute-0 sudo[51098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxeuuaslhvapjotsenmpzxeneobtjge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845903.8771813-247-113333880315522/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 31 07:51:44 compute-0 sudo[51098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:44 compute-0 python3.9[51100]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 07:51:44 compute-0 sudo[51098]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:44 compute-0 sudo[51250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsoxapyuibwplubsuuqugzrkzekjlcik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845904.634455-256-49618664305982/AnsiballZ_file.py'
Jan 31 07:51:44 compute-0 sudo[51250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:45 compute-0 python3.9[51252]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:51:45 compute-0 sudo[51250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:45 compute-0 sudo[51402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgtsxwhswbnnezckyrficofaknmrwvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845905.330362-266-105410540568577/AnsiballZ_stat.py'
Jan 31 07:51:45 compute-0 sudo[51402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:45 compute-0 sudo[51402]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:46 compute-0 sudo[51526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyeozlwmhpngzaqqfwyxzbljcckgynog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845905.330362-266-105410540568577/AnsiballZ_copy.py'
Jan 31 07:51:46 compute-0 sudo[51526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:46 compute-0 sudo[51526]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:46 compute-0 sudo[51678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsxfwovwhhslcpokhyxcakuncdomnrtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845906.4271975-281-101222798490296/AnsiballZ_slurp.py'
Jan 31 07:51:46 compute-0 sudo[51678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:46 compute-0 python3.9[51680]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 07:51:46 compute-0 sudo[51678]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:47 compute-0 sudo[51853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nozkggatbtfgdbmnnnzkiafrbsqkjfio ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845907.1565104-290-227358138517333/async_wrapper.py j421730687349 300 /home/zuul/.ansible/tmp/ansible-tmp-1769845907.1565104-290-227358138517333/AnsiballZ_edpm_os_net_config.py _'
Jan 31 07:51:47 compute-0 sudo[51853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:47 compute-0 ansible-async_wrapper.py[51855]: Invoked with j421730687349 300 /home/zuul/.ansible/tmp/ansible-tmp-1769845907.1565104-290-227358138517333/AnsiballZ_edpm_os_net_config.py _
Jan 31 07:51:47 compute-0 ansible-async_wrapper.py[51858]: Starting module and watcher
Jan 31 07:51:47 compute-0 ansible-async_wrapper.py[51858]: Start watching 51859 (300)
Jan 31 07:51:47 compute-0 ansible-async_wrapper.py[51859]: Start module (51859)
Jan 31 07:51:47 compute-0 ansible-async_wrapper.py[51855]: Return async_wrapper task started.
Jan 31 07:51:47 compute-0 sudo[51853]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:48 compute-0 python3.9[51860]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 07:51:48 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 07:51:48 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 07:51:48 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 07:51:48 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 07:51:48 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7502] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7516] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7918] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7919] audit: op="connection-add" uuid="fac718b4-b03a-45e0-9ba0-0aae3fabe836" name="br-ex-br" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7933] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7934] audit: op="connection-add" uuid="f6f20d26-9ac8-4e9c-9dff-e435e9672047" name="br-ex-port" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7945] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7946] audit: op="connection-add" uuid="ab2b1ef3-b4d9-4f5a-b652-c3a8f4e3d689" name="eth1-port" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7955] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7957] audit: op="connection-add" uuid="798c4f7c-7042-42ea-b333-ee7d64fea4b9" name="vlan20-port" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7966] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7967] audit: op="connection-add" uuid="7d70fcd1-2a2b-475f-9d2e-2bf1043c2183" name="vlan21-port" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7976] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7977] audit: op="connection-add" uuid="665e8497-c949-456f-957a-24a11f388c01" name="vlan22-port" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7988] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.7989] audit: op="connection-add" uuid="9c481e56-a0f5-44df-a51e-2c033baffb6a" name="vlan23-port" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8010] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8024] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8026] audit: op="connection-add" uuid="43aa4e65-e30f-4b97-a89a-be76dfd6b93f" name="br-ex-if" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8074] audit: op="connection-update" uuid="39443809-bacc-53e1-8f0a-bd4718cbb099" name="ci-private-network" args="ovs-interface.type,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.never-default,ipv4.routes,connection.master,connection.timestamp,connection.slave-type,connection.port-type,connection.controller,ipv6.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv6.routes,ovs-external-ids.data" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8089] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8090] audit: op="connection-add" uuid="4994ffdb-eed3-44d0-8d9e-5a9dcda28d83" name="vlan20-if" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8105] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8107] audit: op="connection-add" uuid="bb6537fe-3827-4084-bd4f-9141c6e1d124" name="vlan21-if" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8120] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8121] audit: op="connection-add" uuid="353b0bd2-e6dc-4a05-8951-e383d9a15ba0" name="vlan22-if" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8136] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8137] audit: op="connection-add" uuid="2d6ed329-d6bf-4bcc-af13-8798a80a53b5" name="vlan23-if" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8147] audit: op="connection-delete" uuid="ac4dec35-f971-3578-8a87-5dd4fcab175b" name="Wired connection 1" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8157] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8159] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8165] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8169] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (fac718b4-b03a-45e0-9ba0-0aae3fabe836)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8170] audit: op="connection-activate" uuid="fac718b4-b03a-45e0-9ba0-0aae3fabe836" name="br-ex-br" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8171] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8172] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8177] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8181] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f6f20d26-9ac8-4e9c-9dff-e435e9672047)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8183] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8183] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8188] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8191] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ab2b1ef3-b4d9-4f5a-b652-c3a8f4e3d689)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8193] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8194] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8199] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8202] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (798c4f7c-7042-42ea-b333-ee7d64fea4b9)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8204] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8204] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8209] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8212] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7d70fcd1-2a2b-475f-9d2e-2bf1043c2183)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8214] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8215] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8219] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8223] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (665e8497-c949-456f-957a-24a11f388c01)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8225] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8225] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8230] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8234] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9c481e56-a0f5-44df-a51e-2c033baffb6a)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8234] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8237] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8239] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8244] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8245] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8248] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8251] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (43aa4e65-e30f-4b97-a89a-be76dfd6b93f)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8252] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8255] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8257] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8258] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8259] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8268] device (eth1): disconnecting for new activation request.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8268] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8271] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8273] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8274] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8277] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8278] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8280] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8284] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4994ffdb-eed3-44d0-8d9e-5a9dcda28d83)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8284] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8287] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8288] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8290] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8292] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8293] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8295] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8299] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (bb6537fe-3827-4084-bd4f-9141c6e1d124)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8300] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8302] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8304] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8305] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8307] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8308] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8311] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8315] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (353b0bd2-e6dc-4a05-8951-e383d9a15ba0)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8316] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8319] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8320] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8321] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8324] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <warn>  [1769845909.8324] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8327] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8331] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (2d6ed329-d6bf-4bcc-af13-8798a80a53b5)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8332] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8334] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8336] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8338] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8339] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8351] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8354] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8358] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8360] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8927] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8931] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8935] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8938] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8940] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8945] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8949] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8952] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8954] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8959] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8963] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:51:49 compute-0 kernel: Timeout policy base is empty
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8966] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8967] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8972] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 systemd-udevd[51864]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8975] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8978] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8979] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8984] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8988] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8988] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8988] dhcp4 (eth0): state changed no lease
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.8990] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9000] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9002] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51861 uid=0 result="fail" reason="Device is not activated"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9010] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9014] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9153] device (eth1): Activation: starting connection 'ci-private-network' (39443809-bacc-53e1-8f0a-bd4718cbb099)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9157] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9158] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9159] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9160] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9161] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9162] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9173] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9180] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9183] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 kernel: br-ex: entered promiscuous mode
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9196] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9200] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9204] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9208] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9210] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9213] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9215] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9218] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9222] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9224] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9227] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9230] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9233] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9235] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9248] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9251] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 kernel: vlan22: entered promiscuous mode
Jan 31 07:51:49 compute-0 systemd-udevd[51865]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9273] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9278] device (eth1): state change: ip-config -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9280] device (eth1)[Open vSwitch Port]: detaching ovs interface eth1
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9280] device (eth1): released from controller device eth1
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9286] device (eth1): disconnecting for new activation request.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9287] audit: op="connection-activate" uuid="39443809-bacc-53e1-8f0a-bd4718cbb099" name="ci-private-network" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9289] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9301] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:51:49 compute-0 kernel: vlan21: entered promiscuous mode
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9330] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 kernel: vlan20: entered promiscuous mode
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9359] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51861 uid=0 result="success"
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9360] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9368] device (eth1): Activation: starting connection 'ci-private-network' (39443809-bacc-53e1-8f0a-bd4718cbb099)
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9379] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9382] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9387] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9388] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9390] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9398] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9402] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9411] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9415] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9425] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:51:49 compute-0 kernel: vlan23: entered promiscuous mode
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9436] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9446] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9481] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9482] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9483] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9484] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9486] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9490] device (eth1): Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9493] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9496] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9506] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9509] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9519] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9526] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9546] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9552] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9554] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9558] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9565] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9566] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:51:49 compute-0 NetworkManager[49077]: <info>  [1769845909.9570] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.0809] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.2035] checkpoint[0x55ce43752950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.2037] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.4710] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 sudo[52217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xigqtkhcoggfumjvxlsqdsfejgonnmzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845911.043117-290-64084667295115/AnsiballZ_async_status.py'
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.4723] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.4726] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 sudo[52217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:51 compute-0 python3.9[52219]: ansible-ansible.legacy.async_status Invoked with jid=j421730687349.51855 mode=status _async_dir=/root/.ansible_async
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.6713] audit: op="networking-control" arg="global-dns-configuration" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.6740] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 07:51:51 compute-0 sudo[52217]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.6766] audit: op="networking-control" arg="global-dns-configuration" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.6787] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.8052] checkpoint[0x55ce43752a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 07:51:51 compute-0 NetworkManager[49077]: <info>  [1769845911.8058] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51861 uid=0 result="success"
Jan 31 07:51:51 compute-0 ansible-async_wrapper.py[51859]: Module complete (51859)
Jan 31 07:51:52 compute-0 ansible-async_wrapper.py[51858]: Done in kid B.
Jan 31 07:51:54 compute-0 sudo[52328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjauqdkvqoujnnbqasnqkockzldnmwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845911.043117-290-64084667295115/AnsiballZ_async_status.py'
Jan 31 07:51:54 compute-0 sudo[52328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:55 compute-0 python3.9[52330]: ansible-ansible.legacy.async_status Invoked with jid=j421730687349.51855 mode=status _async_dir=/root/.ansible_async
Jan 31 07:51:55 compute-0 sudo[52328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:55 compute-0 sudo[52428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoubhiwsgpikitwebticiqxfocztpdbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845911.043117-290-64084667295115/AnsiballZ_async_status.py'
Jan 31 07:51:55 compute-0 sudo[52428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:55 compute-0 python3.9[52430]: ansible-ansible.legacy.async_status Invoked with jid=j421730687349.51855 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 07:51:55 compute-0 sudo[52428]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:55 compute-0 sudo[52580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtdflrtnzbfjijhpyyxwcyezzcftyezz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845915.7398858-317-43788777294232/AnsiballZ_stat.py'
Jan 31 07:51:55 compute-0 sudo[52580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:51:57 compute-0 sshd-session[50969]: Invalid user dvr from 176.65.134.22 port 46044
Jan 31 07:51:59 compute-0 sshd-session[50969]: Connection closed by invalid user dvr 176.65.134.22 port 46044 [preauth]
Jan 31 07:52:01 compute-0 python3.9[52582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:52:01 compute-0 sudo[52580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:01 compute-0 sudo[52705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpgnpxajbuwhqvobqkrmfegkjrdfthvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845915.7398858-317-43788777294232/AnsiballZ_copy.py'
Jan 31 07:52:01 compute-0 sudo[52705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:01 compute-0 python3.9[52707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845915.7398858-317-43788777294232/.source.returncode _original_basename=.ac4nm55p follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:52:01 compute-0 sudo[52705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:01 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:52:02 compute-0 sudo[52858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgfywkapilpswiaepkhqlsfoaxliayee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845921.8554592-333-278850256569533/AnsiballZ_stat.py'
Jan 31 07:52:02 compute-0 sudo[52858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:02 compute-0 python3.9[52861]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:52:02 compute-0 sudo[52858]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:02 compute-0 sudo[52982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqfgmscsfabrbafczzsnfpbenoseeafg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845921.8554592-333-278850256569533/AnsiballZ_copy.py'
Jan 31 07:52:02 compute-0 sudo[52982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:02 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:52:02 compute-0 python3.9[52984]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845921.8554592-333-278850256569533/.source.cfg _original_basename=.esbt3bvl follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:52:02 compute-0 sudo[52982]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:03 compute-0 sudo[53136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uedpthynrzisnhgppnxyzmxqzdhqajdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845922.8654535-348-133578837798136/AnsiballZ_systemd.py'
Jan 31 07:52:03 compute-0 sudo[53136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:03 compute-0 python3.9[53138]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:52:03 compute-0 systemd[1]: Reloading Network Manager...
Jan 31 07:52:03 compute-0 NetworkManager[49077]: <info>  [1769845923.4333] audit: op="reload" arg="0" pid=53142 uid=0 result="success"
Jan 31 07:52:03 compute-0 NetworkManager[49077]: <info>  [1769845923.4341] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 07:52:03 compute-0 systemd[1]: Reloaded Network Manager.
Jan 31 07:52:03 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:52:03 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:52:03 compute-0 sudo[53136]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:03 compute-0 sshd-session[45076]: Connection closed by 192.168.122.30 port 55978
Jan 31 07:52:03 compute-0 sshd-session[45073]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:52:03 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 07:52:03 compute-0 systemd[1]: session-10.scope: Consumed 43.497s CPU time.
Jan 31 07:52:03 compute-0 systemd-logind[810]: Session 10 logged out. Waiting for processes to exit.
Jan 31 07:52:03 compute-0 systemd-logind[810]: Removed session 10.
Jan 31 07:52:11 compute-0 sshd-session[52584]: Invalid user nvr from 176.65.134.22 port 48778
Jan 31 07:52:12 compute-0 sshd-session[53178]: Accepted publickey for zuul from 192.168.122.30 port 51936 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:52:12 compute-0 systemd-logind[810]: New session 11 of user zuul.
Jan 31 07:52:13 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 31 07:52:13 compute-0 sshd-session[53178]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:52:13 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:52:13 compute-0 python3.9[53332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:52:14 compute-0 sshd-session[52584]: Connection closed by invalid user nvr 176.65.134.22 port 48778 [preauth]
Jan 31 07:52:14 compute-0 python3.9[53487]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:52:15 compute-0 python3.9[53681]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:52:16 compute-0 sshd-session[53181]: Connection closed by 192.168.122.30 port 51936
Jan 31 07:52:16 compute-0 sshd-session[53178]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:52:16 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 07:52:16 compute-0 systemd[1]: session-11.scope: Consumed 1.813s CPU time.
Jan 31 07:52:16 compute-0 systemd-logind[810]: Session 11 logged out. Waiting for processes to exit.
Jan 31 07:52:16 compute-0 systemd-logind[810]: Removed session 11.
Jan 31 07:52:17 compute-0 sshd-session[53488]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:52:17 compute-0 sshd-session[53488]: Connection reset by 176.65.134.22 port 33844
Jan 31 07:52:20 compute-0 sshd-session[53708]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:52:20 compute-0 sshd-session[53708]: Connection reset by 176.65.134.22 port 57588
Jan 31 07:52:21 compute-0 sshd-session[53711]: Accepted publickey for zuul from 192.168.122.30 port 51944 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:52:21 compute-0 systemd-logind[810]: New session 12 of user zuul.
Jan 31 07:52:21 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 31 07:52:21 compute-0 sshd-session[53711]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:52:22 compute-0 python3.9[53864]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:52:23 compute-0 python3.9[54018]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:52:24 compute-0 sudo[54173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwqdppqnmfobjtcyzdelhvycwqrpyikj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845943.873014-35-123902859320575/AnsiballZ_setup.py'
Jan 31 07:52:24 compute-0 sudo[54173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:24 compute-0 python3.9[54175]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:52:24 compute-0 sudo[54173]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:25 compute-0 sudo[54257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhdtfvqvvukfiadiojvbghyephpparl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845943.873014-35-123902859320575/AnsiballZ_dnf.py'
Jan 31 07:52:25 compute-0 sudo[54257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:25 compute-0 python3.9[54259]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:52:26 compute-0 sudo[54257]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:26 compute-0 sudo[54410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaojuoesavghejpnaijncjwdopavulwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845946.7245567-47-16953772128773/AnsiballZ_setup.py'
Jan 31 07:52:26 compute-0 sudo[54410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:27 compute-0 python3.9[54412]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:52:27 compute-0 sudo[54410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:28 compute-0 sudo[54605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsvbnxlzcnqjecblxtsjghccywvubysl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845947.785765-58-154734337688490/AnsiballZ_file.py'
Jan 31 07:52:28 compute-0 sudo[54605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:28 compute-0 python3.9[54607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:52:28 compute-0 sudo[54605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:28 compute-0 sudo[54757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlndctqzfovmmlzyiqachrndzcmbaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845948.5492992-66-27517254255620/AnsiballZ_command.py'
Jan 31 07:52:28 compute-0 sudo[54757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:29 compute-0 python3.9[54759]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4100905465-merged.mount: Deactivated successfully.
Jan 31 07:52:29 compute-0 podman[54760]: 2026-01-31 07:52:29.234244227 +0000 UTC m=+0.063920631 system refresh
Jan 31 07:52:29 compute-0 sudo[54757]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:29 compute-0 sudo[54918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhzkkdaszlsngpefnkbtsdaggswczck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845949.4153237-74-155140326176681/AnsiballZ_stat.py'
Jan 31 07:52:29 compute-0 sudo[54918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:30 compute-0 python3.9[54920]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:52:30 compute-0 sudo[54918]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:52:30 compute-0 sudo[55041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsklbgrawrejrhgqkktbteseahkkzrih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845949.4153237-74-155140326176681/AnsiballZ_copy.py'
Jan 31 07:52:30 compute-0 sudo[55041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:30 compute-0 python3.9[55043]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769845949.4153237-74-155140326176681/.source.json follow=False _original_basename=podman_network_config.j2 checksum=4d467f43110e735f57edb8e82f02479e66504546 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:52:30 compute-0 sudo[55041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:31 compute-0 sudo[55193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjsiirptsgzddfpnnlhmexahuwqgxqki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845950.8382533-89-4928872377404/AnsiballZ_stat.py'
Jan 31 07:52:31 compute-0 sudo[55193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:31 compute-0 python3.9[55195]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:52:31 compute-0 sudo[55193]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:31 compute-0 sudo[55316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfufvpcbgnfbdsfdpkhwjmhaehpohliv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845950.8382533-89-4928872377404/AnsiballZ_copy.py'
Jan 31 07:52:31 compute-0 sudo[55316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:31 compute-0 python3.9[55318]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845950.8382533-89-4928872377404/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5a7b852ef59dc957321c42a5221bc3eee5ce78e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:52:31 compute-0 sudo[55316]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:32 compute-0 sudo[55468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxclsumhzqppklfsfwffvglfmwrojzjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845951.859041-105-90327538650525/AnsiballZ_ini_file.py'
Jan 31 07:52:32 compute-0 sudo[55468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:32 compute-0 python3.9[55470]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:52:32 compute-0 sudo[55468]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:32 compute-0 sudo[55620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fefvixjfweadbibfgzgcnajcsdqxotjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845952.5755053-105-236265122901475/AnsiballZ_ini_file.py'
Jan 31 07:52:32 compute-0 sudo[55620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:32 compute-0 python3.9[55622]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:52:32 compute-0 sshd-session[53710]: Invalid user ssh from 176.65.134.22 port 49708
Jan 31 07:52:32 compute-0 sudo[55620]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:33 compute-0 sudo[55772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzdtcmkkocsrnuoowokjmghggsyuuad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845953.0786533-105-162343347622796/AnsiballZ_ini_file.py'
Jan 31 07:52:33 compute-0 sudo[55772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:33 compute-0 python3.9[55774]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:52:33 compute-0 sudo[55772]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:33 compute-0 sudo[55924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsurehtfcxoypzjjyycjrvljeucyzjjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845953.6912475-105-64061362236718/AnsiballZ_ini_file.py'
Jan 31 07:52:33 compute-0 sudo[55924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:34 compute-0 python3.9[55926]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:52:34 compute-0 sudo[55924]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:34 compute-0 sudo[56076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dahlktjeflxzedvstiszguaipsmmwowe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845954.3426242-136-162133756235179/AnsiballZ_dnf.py'
Jan 31 07:52:34 compute-0 sudo[56076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:34 compute-0 python3.9[56078]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:52:36 compute-0 sshd-session[53710]: Connection closed by invalid user ssh 176.65.134.22 port 49708 [preauth]
Jan 31 07:52:36 compute-0 sudo[56076]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:36 compute-0 sudo[56230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkfvttxorguuzujhckommxiukcxmvcfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845956.598498-147-119763671512524/AnsiballZ_setup.py'
Jan 31 07:52:36 compute-0 sudo[56230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:37 compute-0 python3.9[56232]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:52:37 compute-0 sudo[56230]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:37 compute-0 sudo[56384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtkncpxqjncncauwumtccmyypryvuzxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845957.2988143-155-146177577532634/AnsiballZ_stat.py'
Jan 31 07:52:37 compute-0 sudo[56384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:37 compute-0 python3.9[56386]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:52:37 compute-0 sudo[56384]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:38 compute-0 sudo[56537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otywnrpiwliuysbvkqykopqgnrdbauwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845957.8990474-164-281250189546492/AnsiballZ_stat.py'
Jan 31 07:52:38 compute-0 sudo[56537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:38 compute-0 python3.9[56539]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:52:38 compute-0 sudo[56537]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:38 compute-0 sudo[56689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubgdunlpdbgojmdniwgraayusokenze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845958.498862-174-256318213082780/AnsiballZ_command.py'
Jan 31 07:52:38 compute-0 sudo[56689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:38 compute-0 python3.9[56691]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:52:38 compute-0 sudo[56689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:39 compute-0 sudo[56842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhprniuczudilewauelisaehwbugltij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845959.11656-184-25390287526891/AnsiballZ_service_facts.py'
Jan 31 07:52:39 compute-0 sudo[56842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:39 compute-0 python3.9[56844]: ansible-service_facts Invoked
Jan 31 07:52:39 compute-0 network[56861]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:52:39 compute-0 network[56862]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:52:39 compute-0 network[56863]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:52:41 compute-0 sudo[56842]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:42 compute-0 sudo[57146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpwsamojzmyjpwyddpezrjvicksnmybt ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769845962.185586-199-36343576525809/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769845962.185586-199-36343576525809/args'
Jan 31 07:52:42 compute-0 sudo[57146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:42 compute-0 sudo[57146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:42 compute-0 sudo[57313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omqvujafmxebsyhsoyvwwkewavxcrqkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845962.717182-210-39281395471628/AnsiballZ_dnf.py'
Jan 31 07:52:42 compute-0 sudo[57313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:43 compute-0 python3.9[57315]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:52:44 compute-0 sudo[57313]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:45 compute-0 sudo[57466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouxwfyxwltnjkvggbnhcjoplauwxzdah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845965.0700853-223-83639476384078/AnsiballZ_package_facts.py'
Jan 31 07:52:45 compute-0 sudo[57466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:45 compute-0 python3.9[57468]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 07:52:46 compute-0 sudo[57466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:46 compute-0 sudo[57618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxwpzmmpqtkeiktmgwbckrdmnbywztih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845966.5082107-233-152202152069860/AnsiballZ_stat.py'
Jan 31 07:52:46 compute-0 sudo[57618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:47 compute-0 python3.9[57620]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:52:47 compute-0 sudo[57618]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:47 compute-0 sudo[57743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkwvqqbbzegcirastuqeakoxtduhkyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845966.5082107-233-152202152069860/AnsiballZ_copy.py'
Jan 31 07:52:47 compute-0 sudo[57743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:47 compute-0 python3.9[57745]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845966.5082107-233-152202152069860/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:52:47 compute-0 sudo[57743]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:48 compute-0 sudo[57897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcicrfqkxzhkkdralwfdhstfzqhxwhwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845967.8394074-248-98447326803024/AnsiballZ_stat.py'
Jan 31 07:52:48 compute-0 sudo[57897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:48 compute-0 python3.9[57899]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:52:48 compute-0 sudo[57897]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:48 compute-0 sudo[58022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfibfvbhbntoghqstxdawvpvivytzxhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845967.8394074-248-98447326803024/AnsiballZ_copy.py'
Jan 31 07:52:48 compute-0 sudo[58022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:48 compute-0 sshd-session[56104]: Invalid user telnet from 176.65.134.22 port 42926
Jan 31 07:52:48 compute-0 python3.9[58024]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845967.8394074-248-98447326803024/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:52:48 compute-0 sudo[58022]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:49 compute-0 sudo[58176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hldmtwcxvjomugtfmpebubwyjqedfrez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845969.2313812-269-116931497218783/AnsiballZ_lineinfile.py'
Jan 31 07:52:49 compute-0 sudo[58176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:49 compute-0 python3.9[58178]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:52:49 compute-0 sudo[58176]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:50 compute-0 sshd-session[56104]: Connection closed by invalid user telnet 176.65.134.22 port 42926 [preauth]
Jan 31 07:52:50 compute-0 sudo[58331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdxdfbjqhluovgovqhmqvizjwsxrhyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845970.2611353-284-207657360730866/AnsiballZ_setup.py'
Jan 31 07:52:50 compute-0 sudo[58331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:50 compute-0 python3.9[58333]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:52:51 compute-0 sudo[58331]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:51 compute-0 sudo[58415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsfzbjofufhmmfrbpmwzjfmzweojltof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845970.2611353-284-207657360730866/AnsiballZ_systemd.py'
Jan 31 07:52:51 compute-0 sudo[58415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:51 compute-0 python3.9[58417]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:52:51 compute-0 sudo[58415]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:52 compute-0 sudo[58569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugkcaqzpjrcwmdoyakjilqhcrzpfnob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845972.383308-300-67015784895237/AnsiballZ_setup.py'
Jan 31 07:52:52 compute-0 sudo[58569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:52 compute-0 python3.9[58571]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:52:53 compute-0 sudo[58569]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:53 compute-0 sudo[58653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcjcuwaflzhkkdohdxmiabsvtyoxpkyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845972.383308-300-67015784895237/AnsiballZ_systemd.py'
Jan 31 07:52:53 compute-0 sudo[58653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:52:53 compute-0 sshd-session[58315]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:52:53 compute-0 sshd-session[58315]: Connection reset by 176.65.134.22 port 45654
Jan 31 07:52:53 compute-0 python3.9[58655]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:52:53 compute-0 chronyd[826]: chronyd exiting
Jan 31 07:52:53 compute-0 systemd[1]: Stopping NTP client/server...
Jan 31 07:52:53 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 07:52:53 compute-0 systemd[1]: Stopped NTP client/server.
Jan 31 07:52:53 compute-0 systemd[1]: Starting NTP client/server...
Jan 31 07:52:53 compute-0 chronyd[58665]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 07:52:53 compute-0 chronyd[58665]: Frequency -26.501 +/- 0.215 ppm read from /var/lib/chrony/drift
Jan 31 07:52:53 compute-0 chronyd[58665]: Loaded seccomp filter (level 2)
Jan 31 07:52:53 compute-0 systemd[1]: Started NTP client/server.
Jan 31 07:52:53 compute-0 sudo[58653]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:54 compute-0 sshd-session[53714]: Connection closed by 192.168.122.30 port 51944
Jan 31 07:52:54 compute-0 sshd-session[53711]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:52:54 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 07:52:54 compute-0 systemd[1]: session-12.scope: Consumed 23.050s CPU time.
Jan 31 07:52:54 compute-0 systemd-logind[810]: Session 12 logged out. Waiting for processes to exit.
Jan 31 07:52:54 compute-0 systemd-logind[810]: Removed session 12.
Jan 31 07:52:56 compute-0 sshd-session[58656]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:52:56 compute-0 sshd-session[58656]: Connection reset by 176.65.134.22 port 41016
Jan 31 07:52:59 compute-0 sshd-session[58691]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:52:59 compute-0 sshd-session[58691]: Connection reset by 176.65.134.22 port 52168
Jan 31 07:52:59 compute-0 sshd-session[58693]: Accepted publickey for zuul from 192.168.122.30 port 38288 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:52:59 compute-0 systemd-logind[810]: New session 13 of user zuul.
Jan 31 07:52:59 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 31 07:52:59 compute-0 sshd-session[58693]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:53:00 compute-0 sudo[58846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czubstwtnelgvjmjxwegfbzjtgjfitns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845980.0801253-17-254662566186139/AnsiballZ_file.py'
Jan 31 07:53:00 compute-0 sudo[58846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:00 compute-0 python3.9[58848]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:00 compute-0 sudo[58846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:01 compute-0 sudo[58999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgcydheztobqpkyvecchgwyxnrrqyevg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845980.9255073-29-279755037372079/AnsiballZ_stat.py'
Jan 31 07:53:01 compute-0 sudo[58999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:01 compute-0 python3.9[59001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:01 compute-0 sudo[58999]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:02 compute-0 sudo[59122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtaptzpkkcwgtmrekmqzcaokhjhnbcia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845980.9255073-29-279755037372079/AnsiballZ_copy.py'
Jan 31 07:53:02 compute-0 sudo[59122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:02 compute-0 python3.9[59124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845980.9255073-29-279755037372079/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:02 compute-0 sudo[59122]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:02 compute-0 sshd-session[58696]: Connection closed by 192.168.122.30 port 38288
Jan 31 07:53:02 compute-0 sshd-session[58693]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:53:02 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 07:53:02 compute-0 systemd[1]: session-13.scope: Consumed 1.477s CPU time.
Jan 31 07:53:02 compute-0 systemd-logind[810]: Session 13 logged out. Waiting for processes to exit.
Jan 31 07:53:02 compute-0 systemd-logind[810]: Removed session 13.
Jan 31 07:53:07 compute-0 sshd-session[59149]: Accepted publickey for zuul from 192.168.122.30 port 41620 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:53:07 compute-0 systemd-logind[810]: New session 14 of user zuul.
Jan 31 07:53:07 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 31 07:53:07 compute-0 sshd-session[59149]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:53:08 compute-0 python3.9[59302]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:53:09 compute-0 sudo[59456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sesflqsnkswirheznrjbhwankaxspeyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845989.320798-28-234563659588719/AnsiballZ_file.py'
Jan 31 07:53:09 compute-0 sudo[59456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:09 compute-0 python3.9[59458]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:10 compute-0 sudo[59456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:10 compute-0 sudo[59631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbedvedimptjrxqqhjjtufwyhmvzgobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845990.1394036-36-116243089901200/AnsiballZ_stat.py'
Jan 31 07:53:10 compute-0 sudo[59631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:10 compute-0 python3.9[59633]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:10 compute-0 sudo[59631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:11 compute-0 sudo[59754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhdpzrqayswsmkxgzwpqgldcqouxipik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845990.1394036-36-116243089901200/AnsiballZ_copy.py'
Jan 31 07:53:11 compute-0 sudo[59754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:11 compute-0 python3.9[59756]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769845990.1394036-36-116243089901200/.source.json _original_basename=.d0is0o6l follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:11 compute-0 sudo[59754]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:11 compute-0 sudo[59906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btksdkrlvgollczyrzbhnxwnbyqpyizg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845991.6542873-59-48231587307621/AnsiballZ_stat.py'
Jan 31 07:53:11 compute-0 sudo[59906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:12 compute-0 python3.9[59908]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:12 compute-0 sudo[59906]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:12 compute-0 sudo[60029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yttslvvbdfogvojzqrycnkhavknhsoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845991.6542873-59-48231587307621/AnsiballZ_copy.py'
Jan 31 07:53:12 compute-0 sudo[60029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:12 compute-0 python3.9[60031]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845991.6542873-59-48231587307621/.source _original_basename=.f0craszp follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:12 compute-0 sudo[60029]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:12 compute-0 sshd-session[58692]: Invalid user supervisor from 176.65.134.22 port 36100
Jan 31 07:53:13 compute-0 sudo[60181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwptrdxewpdwqbuevfqcywnssncwnfzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845992.79878-75-279920381056201/AnsiballZ_file.py'
Jan 31 07:53:13 compute-0 sudo[60181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:13 compute-0 python3.9[60183]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:53:13 compute-0 sudo[60181]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:13 compute-0 sudo[60333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cijoonndrgtioqddcddkzcyuvtdpxhqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845993.3926933-83-218252803516211/AnsiballZ_stat.py'
Jan 31 07:53:13 compute-0 sudo[60333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:13 compute-0 python3.9[60335]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:13 compute-0 sudo[60333]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:14 compute-0 sudo[60456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoywetljxwlasjgzbanmzddogyvcwcgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845993.3926933-83-218252803516211/AnsiballZ_copy.py'
Jan 31 07:53:14 compute-0 sudo[60456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:14 compute-0 python3.9[60458]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845993.3926933-83-218252803516211/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:53:14 compute-0 sudo[60456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:14 compute-0 sudo[60608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oabwzujuxrzilzwwlosetbdgnrglnjln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845994.4426343-83-222302427990138/AnsiballZ_stat.py'
Jan 31 07:53:14 compute-0 sudo[60608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:14 compute-0 python3.9[60610]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:14 compute-0 sshd-session[58692]: Connection closed by invalid user supervisor 176.65.134.22 port 36100 [preauth]
Jan 31 07:53:14 compute-0 sudo[60608]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:15 compute-0 sudo[60731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ealncgbuorvjqdzlinmxtwvxpouzkqop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845994.4426343-83-222302427990138/AnsiballZ_copy.py'
Jan 31 07:53:15 compute-0 sudo[60731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:15 compute-0 python3.9[60733]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845994.4426343-83-222302427990138/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:53:15 compute-0 sudo[60731]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:15 compute-0 sudo[60883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmciobmobeyjmldlfhewyzjczcowmzuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845995.5260603-112-114340714975431/AnsiballZ_file.py'
Jan 31 07:53:15 compute-0 sudo[60883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:15 compute-0 python3.9[60885]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:15 compute-0 sudo[60883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:16 compute-0 sudo[61035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkzrztoedfbzpeuwcdhilpiiscvtzket ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845996.0969324-120-226649469457207/AnsiballZ_stat.py'
Jan 31 07:53:16 compute-0 sudo[61035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:16 compute-0 python3.9[61037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:16 compute-0 sudo[61035]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:16 compute-0 sudo[61158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppwbzjedgrwebiuyewalkydrqxkbhszs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845996.0969324-120-226649469457207/AnsiballZ_copy.py'
Jan 31 07:53:16 compute-0 sudo[61158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:16 compute-0 python3.9[61160]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769845996.0969324-120-226649469457207/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:16 compute-0 sudo[61158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:17 compute-0 sudo[61310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbgiestytjkvlcgcmttwhtjeefxgrnxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845997.0397294-135-54743755078643/AnsiballZ_stat.py'
Jan 31 07:53:17 compute-0 sudo[61310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:17 compute-0 python3.9[61312]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:17 compute-0 sudo[61310]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:17 compute-0 sudo[61433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxiuzxtfhqeqdmklhgozdfpxegcwmaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845997.0397294-135-54743755078643/AnsiballZ_copy.py'
Jan 31 07:53:17 compute-0 sudo[61433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:17 compute-0 python3.9[61435]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769845997.0397294-135-54743755078643/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:17 compute-0 sudo[61433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:18 compute-0 sudo[61585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvqfghltpxkabpevltqhbkgogvymtdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845998.0679448-150-264236111444598/AnsiballZ_systemd.py'
Jan 31 07:53:18 compute-0 sudo[61585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:18 compute-0 python3.9[61587]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:53:18 compute-0 systemd[1]: Reloading.
Jan 31 07:53:18 compute-0 systemd-rc-local-generator[61611]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:53:18 compute-0 systemd-sysv-generator[61615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:53:19 compute-0 systemd[1]: Reloading.
Jan 31 07:53:19 compute-0 systemd-sysv-generator[61653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:53:19 compute-0 systemd-rc-local-generator[61650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:53:19 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 07:53:19 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 07:53:19 compute-0 sudo[61585]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:19 compute-0 sudo[61813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjdryhmpwbpzqkerlwparomhuksjhinz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845999.4234037-158-281181745033604/AnsiballZ_stat.py'
Jan 31 07:53:19 compute-0 sudo[61813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:19 compute-0 python3.9[61815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:19 compute-0 sudo[61813]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:20 compute-0 sudo[61936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmseqvhpuqfsvllntbtzhmrlasyihxaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769845999.4234037-158-281181745033604/AnsiballZ_copy.py'
Jan 31 07:53:20 compute-0 sudo[61936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:20 compute-0 python3.9[61938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769845999.4234037-158-281181745033604/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:20 compute-0 sudo[61936]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:20 compute-0 sudo[62088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmrhqpbpmhyjrheygelebukoxdvsofvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846000.4486358-173-33683874000540/AnsiballZ_stat.py'
Jan 31 07:53:20 compute-0 sudo[62088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:20 compute-0 python3.9[62090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:20 compute-0 sudo[62088]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:21 compute-0 sudo[62211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhlxsmelznsvvvaqpwoyqanozexxlrno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846000.4486358-173-33683874000540/AnsiballZ_copy.py'
Jan 31 07:53:21 compute-0 sudo[62211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:21 compute-0 python3.9[62213]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846000.4486358-173-33683874000540/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:21 compute-0 sudo[62211]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:21 compute-0 sudo[62363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prnbjosqeijsrdvpcwuehtyudzybgaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846001.5855074-188-130899652743754/AnsiballZ_systemd.py'
Jan 31 07:53:21 compute-0 sudo[62363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:22 compute-0 python3.9[62365]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:53:22 compute-0 systemd[1]: Reloading.
Jan 31 07:53:22 compute-0 systemd-sysv-generator[62392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:53:22 compute-0 systemd-rc-local-generator[62389]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:53:22 compute-0 systemd[1]: Reloading.
Jan 31 07:53:22 compute-0 systemd-rc-local-generator[62427]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:53:22 compute-0 systemd-sysv-generator[62430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:53:22 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 07:53:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:53:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:53:22 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 07:53:22 compute-0 sudo[62363]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:23 compute-0 python3.9[62591]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:53:23 compute-0 network[62608]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:53:23 compute-0 network[62609]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:53:23 compute-0 network[62610]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:53:27 compute-0 sudo[62870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuealxudtpnpenuljemshvcilcazkyys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846007.5866046-204-12750536251139/AnsiballZ_systemd.py'
Jan 31 07:53:27 compute-0 sudo[62870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:28 compute-0 python3.9[62872]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:53:28 compute-0 systemd[1]: Reloading.
Jan 31 07:53:28 compute-0 systemd-sysv-generator[62905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:53:28 compute-0 systemd-rc-local-generator[62897]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:53:28 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 07:53:28 compute-0 iptables.init[62912]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 07:53:28 compute-0 iptables.init[62912]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 07:53:28 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 07:53:28 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 07:53:28 compute-0 sudo[62870]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:29 compute-0 sudo[63106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjaufdxpimxugfspwkmgxqyqstvgxpvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846008.9189422-204-96866106302279/AnsiballZ_systemd.py'
Jan 31 07:53:29 compute-0 sudo[63106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:29 compute-0 python3.9[63108]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:53:29 compute-0 sudo[63106]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:29 compute-0 sudo[63260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhyfamvprnsgqrkxkhonswiafvrnjkwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846009.7115476-220-84715387814282/AnsiballZ_systemd.py'
Jan 31 07:53:29 compute-0 sudo[63260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:30 compute-0 python3.9[63262]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:53:30 compute-0 systemd[1]: Reloading.
Jan 31 07:53:30 compute-0 systemd-rc-local-generator[63288]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:53:30 compute-0 systemd-sysv-generator[63292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:53:30 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 31 07:53:30 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 31 07:53:30 compute-0 sudo[63260]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:31 compute-0 sudo[63452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylbejjiatdzejchzgiveykhysjlxfqdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846010.7490263-228-244567290672016/AnsiballZ_command.py'
Jan 31 07:53:31 compute-0 sudo[63452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:31 compute-0 python3.9[63454]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:53:31 compute-0 sudo[63452]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:32 compute-0 sudo[63605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osqrwhotqgmyhegivrsvmgdbbegjqrrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846011.7588725-242-63282000441650/AnsiballZ_stat.py'
Jan 31 07:53:32 compute-0 sudo[63605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:32 compute-0 python3.9[63607]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:32 compute-0 sudo[63605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:32 compute-0 sudo[63730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzqailycyxkbgpaxkcvporzaixhamztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846011.7588725-242-63282000441650/AnsiballZ_copy.py'
Jan 31 07:53:32 compute-0 sudo[63730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:32 compute-0 python3.9[63732]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846011.7588725-242-63282000441650/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:32 compute-0 sudo[63730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:33 compute-0 sudo[63883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opopdzosmiuuklslmkwialdgogknboln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846012.9183064-257-107580950369842/AnsiballZ_systemd.py'
Jan 31 07:53:33 compute-0 sudo[63883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:33 compute-0 python3.9[63885]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:53:33 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 07:53:33 compute-0 sshd[1002]: Received SIGHUP; restarting.
Jan 31 07:53:33 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 07:53:33 compute-0 sshd[1002]: Server listening on 0.0.0.0 port 22.
Jan 31 07:53:33 compute-0 sshd[1002]: Server listening on :: port 22.
Jan 31 07:53:33 compute-0 sudo[63883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:33 compute-0 sudo[64039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxzckpfmgxtrtnztongqnmxhrhlkyvbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846013.6019206-265-24945416885697/AnsiballZ_file.py'
Jan 31 07:53:33 compute-0 sudo[64039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:34 compute-0 python3.9[64041]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:34 compute-0 sudo[64039]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:34 compute-0 sudo[64191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsmaiuvgjxdqzaqtnjmverbuwskobjya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846014.1536043-273-164004750847076/AnsiballZ_stat.py'
Jan 31 07:53:34 compute-0 sudo[64191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:34 compute-0 python3.9[64193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:34 compute-0 sudo[64191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:34 compute-0 sudo[64314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbmlltvxhmhfpujjqivjrdisxglmkid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846014.1536043-273-164004750847076/AnsiballZ_copy.py'
Jan 31 07:53:34 compute-0 sudo[64314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:35 compute-0 python3.9[64316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846014.1536043-273-164004750847076/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:35 compute-0 sudo[64314]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:35 compute-0 sudo[64466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeweigvgbtquvfipifpdjivykpzptnwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846015.3674955-291-223117602836992/AnsiballZ_timezone.py'
Jan 31 07:53:35 compute-0 sudo[64466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:35 compute-0 python3.9[64468]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 07:53:36 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 07:53:36 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 07:53:36 compute-0 sudo[64466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:36 compute-0 sudo[64622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mafwtyhuetuijzhreeitbwjtnqtnafij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846016.3122907-300-200167940834469/AnsiballZ_file.py'
Jan 31 07:53:36 compute-0 sudo[64622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:36 compute-0 python3.9[64624]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:36 compute-0 sudo[64622]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:37 compute-0 sudo[64774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdwskivcmpqvmbjsjqdszpafndkrehrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846016.8974261-308-67583427849119/AnsiballZ_stat.py'
Jan 31 07:53:37 compute-0 sudo[64774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:37 compute-0 python3.9[64776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:37 compute-0 sudo[64774]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:37 compute-0 sudo[64897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpsldrkdkamzukrumatsdonwjycxaqok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846016.8974261-308-67583427849119/AnsiballZ_copy.py'
Jan 31 07:53:37 compute-0 sudo[64897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:37 compute-0 python3.9[64899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846016.8974261-308-67583427849119/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:37 compute-0 sudo[64897]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:38 compute-0 sudo[65049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybfmppsxcezjnbouapqlwxccszptvsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846017.9663572-323-161822106029273/AnsiballZ_stat.py'
Jan 31 07:53:38 compute-0 sudo[65049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:38 compute-0 python3.9[65051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:38 compute-0 sudo[65049]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:38 compute-0 sudo[65172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgilzgarkznoqivdmzutsivqulsadxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846017.9663572-323-161822106029273/AnsiballZ_copy.py'
Jan 31 07:53:38 compute-0 sudo[65172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:38 compute-0 python3.9[65174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846017.9663572-323-161822106029273/.source.yaml _original_basename=.4i7ka3p3 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:38 compute-0 sudo[65172]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:39 compute-0 sudo[65324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysmfyowqpckthpwvynckiogusyolgnej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846019.0243056-338-212044653851132/AnsiballZ_stat.py'
Jan 31 07:53:39 compute-0 sudo[65324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:39 compute-0 python3.9[65326]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:39 compute-0 sudo[65324]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:39 compute-0 sudo[65447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcktblliwnryepqkfmvghujrrtzcpkyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846019.0243056-338-212044653851132/AnsiballZ_copy.py'
Jan 31 07:53:39 compute-0 sudo[65447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:39 compute-0 python3.9[65449]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846019.0243056-338-212044653851132/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:39 compute-0 sudo[65447]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:40 compute-0 sudo[65599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huysmjybpvraqlbkmnotjdosibhzwdvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846020.131661-353-261467370966262/AnsiballZ_command.py'
Jan 31 07:53:40 compute-0 sudo[65599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:40 compute-0 python3.9[65601]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:53:40 compute-0 sudo[65599]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:41 compute-0 sudo[65752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzjnnktwwkateyykavzmrdtqkpwnfatl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846020.7371702-361-184825392640304/AnsiballZ_command.py'
Jan 31 07:53:41 compute-0 sudo[65752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:41 compute-0 python3.9[65754]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:53:41 compute-0 sudo[65752]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:41 compute-0 sudo[65905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkkbmszruqrocrlfxczoqxxofieqzdbd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846021.3388674-369-176789315814416/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:53:41 compute-0 sudo[65905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:41 compute-0 python3[65907]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:53:41 compute-0 sudo[65905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:42 compute-0 sudo[66057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dftpyptqupetlcnnefqhsotomhryhdal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846022.1352534-377-63878524700377/AnsiballZ_stat.py'
Jan 31 07:53:42 compute-0 sudo[66057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:42 compute-0 python3.9[66059]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:42 compute-0 sudo[66057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:42 compute-0 sudo[66180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqatdgwzarbjcdpppafnyrpfpuaubeuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846022.1352534-377-63878524700377/AnsiballZ_copy.py'
Jan 31 07:53:42 compute-0 sudo[66180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:43 compute-0 python3.9[66182]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846022.1352534-377-63878524700377/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:43 compute-0 sudo[66180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:43 compute-0 sudo[66332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zguppqrqmzselzydgbpsvtbsqdhsqtfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846023.281578-392-104076080861193/AnsiballZ_stat.py'
Jan 31 07:53:43 compute-0 sudo[66332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:43 compute-0 python3.9[66334]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:43 compute-0 sudo[66332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:43 compute-0 sudo[66455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uycoauuxlqqojcnvpcvmzkgjyrbokpvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846023.281578-392-104076080861193/AnsiballZ_copy.py'
Jan 31 07:53:43 compute-0 sudo[66455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:44 compute-0 python3.9[66457]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846023.281578-392-104076080861193/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:44 compute-0 sudo[66455]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:44 compute-0 sudo[66607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmzkdrlpmojbujwmzetcsgnjgxlspbpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846024.2334957-407-97664668075181/AnsiballZ_stat.py'
Jan 31 07:53:44 compute-0 sudo[66607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:44 compute-0 python3.9[66609]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:44 compute-0 sudo[66607]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:44 compute-0 sudo[66730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwcxrtqxshojipsyhewaageezbesojhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846024.2334957-407-97664668075181/AnsiballZ_copy.py'
Jan 31 07:53:44 compute-0 sudo[66730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:45 compute-0 python3.9[66732]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846024.2334957-407-97664668075181/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:45 compute-0 sudo[66730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:45 compute-0 sudo[66882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjbitgexvvijtpndtaziuzmvjcrugsmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846025.2007322-422-134084480089092/AnsiballZ_stat.py'
Jan 31 07:53:45 compute-0 sudo[66882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:45 compute-0 python3.9[66884]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:45 compute-0 sudo[66882]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:45 compute-0 sudo[67005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmfmrcobnclntlaowdgjwxgbzphkqdpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846025.2007322-422-134084480089092/AnsiballZ_copy.py'
Jan 31 07:53:45 compute-0 sudo[67005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:46 compute-0 python3.9[67007]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846025.2007322-422-134084480089092/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:46 compute-0 sudo[67005]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:46 compute-0 sudo[67157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfqhwgeofkfrnehlfmfysopgactbskfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846026.2047825-437-62892447905041/AnsiballZ_stat.py'
Jan 31 07:53:46 compute-0 sudo[67157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:46 compute-0 python3.9[67159]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:53:46 compute-0 sudo[67157]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:46 compute-0 sudo[67280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwcqelamtyynpeefwpulcjjwzmxvrsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846026.2047825-437-62892447905041/AnsiballZ_copy.py'
Jan 31 07:53:46 compute-0 sudo[67280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:47 compute-0 python3.9[67282]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846026.2047825-437-62892447905041/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:47 compute-0 sudo[67280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:47 compute-0 sudo[67432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdllcktagruidxtygdttflkznbqwykqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846027.2852852-452-120190198232537/AnsiballZ_file.py'
Jan 31 07:53:47 compute-0 sudo[67432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:47 compute-0 python3.9[67434]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:47 compute-0 sudo[67432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:48 compute-0 sudo[67584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nivhiofnahoareckjwhfubmlgjtzhwst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846027.8770971-460-41152646588/AnsiballZ_command.py'
Jan 31 07:53:48 compute-0 sudo[67584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:48 compute-0 python3.9[67586]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:53:48 compute-0 sudo[67584]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:48 compute-0 sudo[67743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrjvebzeojubycnngwzhccxhugzwnzsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846028.453023-468-8445214424864/AnsiballZ_blockinfile.py'
Jan 31 07:53:48 compute-0 sudo[67743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:49 compute-0 python3.9[67745]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:49 compute-0 sudo[67743]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:49 compute-0 sudo[67896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnagtekmrtsmivdbdxbmlqoxnkbyxxze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846029.2902694-477-211992381581740/AnsiballZ_file.py'
Jan 31 07:53:49 compute-0 sudo[67896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:49 compute-0 python3.9[67898]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:49 compute-0 sudo[67896]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:50 compute-0 sudo[68048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhysbpdfodgzhumrzcpjifyyxjqvyawu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846029.9085338-477-164719448369280/AnsiballZ_file.py'
Jan 31 07:53:50 compute-0 sudo[68048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:50 compute-0 python3.9[68050]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:53:50 compute-0 sudo[68048]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:50 compute-0 sudo[68200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmssjpoxflxunjhhymgxocqlyemsmge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846030.4724145-492-105712775104868/AnsiballZ_mount.py'
Jan 31 07:53:50 compute-0 sudo[68200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:51 compute-0 python3.9[68202]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:53:51 compute-0 sudo[68200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:51 compute-0 sudo[68353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxrelhukorgrrmuhsponeihkfnwuznpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846031.306527-492-232365198771200/AnsiballZ_mount.py'
Jan 31 07:53:51 compute-0 sudo[68353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:53:51 compute-0 python3.9[68355]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:53:51 compute-0 sudo[68353]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:52 compute-0 sshd-session[59152]: Connection closed by 192.168.122.30 port 41620
Jan 31 07:53:52 compute-0 sshd-session[59149]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:53:52 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 07:53:52 compute-0 systemd[1]: session-14.scope: Consumed 30.557s CPU time.
Jan 31 07:53:52 compute-0 systemd-logind[810]: Session 14 logged out. Waiting for processes to exit.
Jan 31 07:53:52 compute-0 systemd-logind[810]: Removed session 14.
Jan 31 07:53:59 compute-0 sshd-session[68381]: Accepted publickey for zuul from 192.168.122.30 port 34206 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:53:59 compute-0 systemd-logind[810]: New session 15 of user zuul.
Jan 31 07:53:59 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 31 07:53:59 compute-0 sshd-session[68381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:54:00 compute-0 sudo[68534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spslxhsxdwvkdignahfymltybojtpehx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846039.8948102-16-161533576066326/AnsiballZ_tempfile.py'
Jan 31 07:54:00 compute-0 sudo[68534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:00 compute-0 python3.9[68536]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 07:54:00 compute-0 sudo[68534]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:01 compute-0 sudo[68686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxzlhjplvtvaggtjorjaspckzenstnkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846040.7344933-28-7491175851526/AnsiballZ_stat.py'
Jan 31 07:54:01 compute-0 sudo[68686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:01 compute-0 python3.9[68688]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:54:01 compute-0 sudo[68686]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:02 compute-0 sudo[68838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipndqrbaxiamujtjgjucwskosrbmiael ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846041.524029-38-115146100700103/AnsiballZ_setup.py'
Jan 31 07:54:02 compute-0 sudo[68838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:02 compute-0 python3.9[68840]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:54:02 compute-0 sudo[68838]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:03 compute-0 sudo[68990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhntdyihopnvqqnvutfxjjcwkxzxsrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846042.782597-47-264961919909817/AnsiballZ_blockinfile.py'
Jan 31 07:54:03 compute-0 sudo[68990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:03 compute-0 python3.9[68992]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSi7g2ictuN272qiLsoojfDgx9lVVeboCWir6rHMCvPas/6btnjBiRTJVqkKZZ4eYOzP+Weh/EzuT+JxHSkyL/+Ny46rPtucKgaliFZHkmYaXkqXDO2hgUREKT1GuGQzwsjZJ1vHputMWP5ScgRg8J5Fb7dOqFetCw+XKlYgSQEES479PDCn07JxC31a98csniIau6S9yA9XKG+kVD+Nh4mnhcFE10YkGvVhoSIZMPwKKaBQUUzJLRIbp7316V+klNshXsetD99gfhEdoWDdH/1ew4fStSfYMA7SX12zAIZhr++IDXVfWwMvf9bF24wE5nbmpAB3ro7wS+zw8BdWd7dNZXCVyjQcGNA08B0H8pO5anFxBjj5yHx/tMOsluEXf04mIitZyxRxeiizNAXRiskLQQTYpSEgQ6JcbyoCc+9WkV/6rIsaxIefHqJty7/8m5wH0FAV4pXkiySzNGqYibMmGqXYp0L7Z5/pYCyeNpsMQZsEFfJwr8C4SvpNV5fBk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKUyx/kEdFEReRk/h5tefV1FGVtIeEqlJ58UerPMBWbi
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPZxv8WmwcRYDh1TZXwppWAC6GeAYeABCRxKbXZ28nbPzHV8jfXeqxH3V0Cwj8EIISR/dBVdlUDrj3cyaqb+iZk=
                                             create=True mode=0644 path=/tmp/ansible.7zovy2rc state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:54:03 compute-0 sudo[68990]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:03 compute-0 sudo[69142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywluxpqllrelpcbnuhsactwrlgkidwom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846043.5394857-55-184289539270861/AnsiballZ_command.py'
Jan 31 07:54:03 compute-0 sudo[69142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:04 compute-0 python3.9[69144]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.7zovy2rc' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:54:04 compute-0 sudo[69142]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:04 compute-0 sudo[69296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxkwwfcdnzwtsiblrcjzdbhsoxluuxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846044.295014-63-214711945785000/AnsiballZ_file.py'
Jan 31 07:54:04 compute-0 sudo[69296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:04 compute-0 python3.9[69298]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.7zovy2rc state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:54:04 compute-0 sudo[69296]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:05 compute-0 sshd-session[68384]: Connection closed by 192.168.122.30 port 34206
Jan 31 07:54:05 compute-0 sshd-session[68381]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:54:05 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 07:54:05 compute-0 systemd[1]: session-15.scope: Consumed 3.005s CPU time.
Jan 31 07:54:05 compute-0 systemd-logind[810]: Session 15 logged out. Waiting for processes to exit.
Jan 31 07:54:05 compute-0 systemd-logind[810]: Removed session 15.
Jan 31 07:54:06 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 07:54:17 compute-0 sshd-session[69325]: Accepted publickey for zuul from 192.168.122.30 port 37038 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:54:17 compute-0 systemd-logind[810]: New session 16 of user zuul.
Jan 31 07:54:17 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 31 07:54:17 compute-0 sshd-session[69325]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:54:18 compute-0 python3.9[69478]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:54:19 compute-0 sudo[69632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mawkbedexvmvmtzjunbwiywexzocokso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846059.1739619-27-278295113775327/AnsiballZ_systemd.py'
Jan 31 07:54:19 compute-0 sudo[69632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:19 compute-0 python3.9[69634]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:54:20 compute-0 sudo[69632]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:20 compute-0 sudo[69786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xepwkcnfncxfupxhttjvxzetajvkcouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846060.1889422-35-212784838218612/AnsiballZ_systemd.py'
Jan 31 07:54:20 compute-0 sudo[69786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:20 compute-0 python3.9[69788]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:54:20 compute-0 sudo[69786]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:21 compute-0 sudo[69939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujjsfgqaoexsmtlmbofsntmvnplmgpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846060.9806132-44-66879491295121/AnsiballZ_command.py'
Jan 31 07:54:21 compute-0 sudo[69939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:21 compute-0 python3.9[69941]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:54:21 compute-0 sudo[69939]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:22 compute-0 sudo[70092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwxmcgaflmxlzsmsjafhtuitzivnvny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846061.7716775-52-190462848864420/AnsiballZ_stat.py'
Jan 31 07:54:22 compute-0 sudo[70092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:22 compute-0 python3.9[70094]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:54:22 compute-0 sudo[70092]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:22 compute-0 sudo[70246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfpzekwrcrvhbelxacwqfcukikxgwech ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846062.6734593-60-202031485930476/AnsiballZ_command.py'
Jan 31 07:54:22 compute-0 sudo[70246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:23 compute-0 python3.9[70248]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:54:23 compute-0 sudo[70246]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:24 compute-0 sudo[70401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdwmdhysjdpeqsdarournvewcgoahyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846063.749217-68-44246571706364/AnsiballZ_file.py'
Jan 31 07:54:24 compute-0 sudo[70401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:24 compute-0 python3.9[70403]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:54:24 compute-0 sudo[70401]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:24 compute-0 sshd-session[69328]: Connection closed by 192.168.122.30 port 37038
Jan 31 07:54:24 compute-0 sshd-session[69325]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:54:24 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 07:54:24 compute-0 systemd[1]: session-16.scope: Consumed 4.101s CPU time.
Jan 31 07:54:24 compute-0 systemd-logind[810]: Session 16 logged out. Waiting for processes to exit.
Jan 31 07:54:24 compute-0 systemd-logind[810]: Removed session 16.
Jan 31 07:54:34 compute-0 sshd-session[70428]: Accepted publickey for zuul from 192.168.122.30 port 58776 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 07:54:34 compute-0 systemd-logind[810]: New session 17 of user zuul.
Jan 31 07:54:34 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 31 07:54:34 compute-0 sshd-session[70428]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:54:35 compute-0 python3.9[70581]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:54:36 compute-0 sudo[70735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wousksqlgoqoasccsgegvnjzjfkjbdpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846075.8034954-29-230242804956949/AnsiballZ_setup.py'
Jan 31 07:54:36 compute-0 sudo[70735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:36 compute-0 python3.9[70737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:54:36 compute-0 sudo[70735]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:37 compute-0 sudo[70819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnqhjnmwgqnaduxdlqniacupottfhskf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846075.8034954-29-230242804956949/AnsiballZ_dnf.py'
Jan 31 07:54:37 compute-0 sudo[70819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:37 compute-0 python3.9[70821]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:54:38 compute-0 sudo[70819]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:39 compute-0 python3.9[70972]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:54:40 compute-0 python3.9[71123]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:54:41 compute-0 python3.9[71273]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:54:41 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:54:42 compute-0 python3.9[71424]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:54:42 compute-0 sshd-session[70431]: Connection closed by 192.168.122.30 port 58776
Jan 31 07:54:42 compute-0 sshd-session[70428]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:54:42 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 07:54:42 compute-0 systemd[1]: session-17.scope: Consumed 5.567s CPU time.
Jan 31 07:54:42 compute-0 systemd-logind[810]: Session 17 logged out. Waiting for processes to exit.
Jan 31 07:54:42 compute-0 systemd-logind[810]: Removed session 17.
Jan 31 07:54:51 compute-0 sshd-session[71449]: Accepted publickey for zuul from 38.102.83.129 port 58200 ssh2: RSA SHA256:7fpkPihK+1pYJj229Mqe0V6aalzFoVGtAbEqTCFuZew
Jan 31 07:54:51 compute-0 systemd-logind[810]: New session 18 of user zuul.
Jan 31 07:54:51 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 31 07:54:51 compute-0 sshd-session[71449]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:54:51 compute-0 sudo[71525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgtbncigwbaydturnuuxwsngxscauzqk ; /usr/bin/python3'
Jan 31 07:54:51 compute-0 sudo[71525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:52 compute-0 useradd[71529]: new group: name=ceph-admin, GID=42478
Jan 31 07:54:52 compute-0 useradd[71529]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 31 07:54:52 compute-0 sudo[71525]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:52 compute-0 sudo[71611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtsopphznfwluektorwtdtouhotvusly ; /usr/bin/python3'
Jan 31 07:54:52 compute-0 sudo[71611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:52 compute-0 sudo[71611]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:53 compute-0 sudo[71684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbtoqnyuaegjhiudkeqaukwovejmhvak ; /usr/bin/python3'
Jan 31 07:54:53 compute-0 sudo[71684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:53 compute-0 sudo[71684]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:53 compute-0 sudo[71734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjhdsuckvlrxtjnnvqlyfitmbookqpsh ; /usr/bin/python3'
Jan 31 07:54:53 compute-0 sudo[71734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:53 compute-0 sudo[71734]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:54 compute-0 sudo[71760]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ughhnizqghwvsboeywwanjtucbjvigsu ; /usr/bin/python3'
Jan 31 07:54:54 compute-0 sudo[71760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:54 compute-0 sudo[71760]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:54 compute-0 sudo[71786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnovbwliefzqvoekaaewmbqhuvqgdcq ; /usr/bin/python3'
Jan 31 07:54:54 compute-0 sudo[71786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:54 compute-0 sudo[71786]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:54 compute-0 sudo[71812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiyedrtjpjmkurtvzmeicvgqtdwfshgv ; /usr/bin/python3'
Jan 31 07:54:54 compute-0 sudo[71812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:55 compute-0 sudo[71812]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:55 compute-0 sudo[71890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrqgevbyjlssatvtufkujtyvtgglhqyu ; /usr/bin/python3'
Jan 31 07:54:55 compute-0 sudo[71890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:55 compute-0 sudo[71890]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:55 compute-0 sudo[71963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsqclxtughgoujrwbnazafqrafwvytps ; /usr/bin/python3'
Jan 31 07:54:55 compute-0 sudo[71963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:56 compute-0 sudo[71963]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:56 compute-0 sudo[72065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrhefmbywpgdvsstfroiusmqinsgpnk ; /usr/bin/python3'
Jan 31 07:54:56 compute-0 sudo[72065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:56 compute-0 sudo[72065]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:56 compute-0 sudo[72138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpxawbkyotzsnhgkcjvtahbgrwnrhltv ; /usr/bin/python3'
Jan 31 07:54:56 compute-0 sudo[72138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:56 compute-0 sudo[72138]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:57 compute-0 sudo[72188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceromemnphbvhbkmkxftjrqtjxbnyhkc ; /usr/bin/python3'
Jan 31 07:54:57 compute-0 sudo[72188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:57 compute-0 python3[72190]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:54:58 compute-0 sudo[72188]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:58 compute-0 sudo[72283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssvoukuxlubhaspoebuyljesyjycbsdi ; /usr/bin/python3'
Jan 31 07:54:58 compute-0 sudo[72283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:54:59 compute-0 python3[72285]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:55:00 compute-0 sudo[72283]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:00 compute-0 sudo[72310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjdjxikuzymxtxzmwimurxoralshtyi ; /usr/bin/python3'
Jan 31 07:55:00 compute-0 sudo[72310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:00 compute-0 python3[72312]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:00 compute-0 sudo[72310]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:00 compute-0 sudo[72336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xafugdwbzxgowlxytzgamoygvvxttmlw ; /usr/bin/python3'
Jan 31 07:55:00 compute-0 sudo[72336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:00 compute-0 python3[72338]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:00 compute-0 kernel: loop: module loaded
Jan 31 07:55:00 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 31 07:55:00 compute-0 sudo[72336]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:01 compute-0 sudo[72371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfjyzkzwvjvagffhixbgaggygjbqpxfk ; /usr/bin/python3'
Jan 31 07:55:01 compute-0 sudo[72371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:01 compute-0 python3[72373]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:01 compute-0 lvm[72376]: PV /dev/loop3 not used.
Jan 31 07:55:01 compute-0 lvm[72378]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:55:01 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 07:55:01 compute-0 lvm[72384]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 07:55:01 compute-0 lvm[72388]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:55:01 compute-0 lvm[72388]: VG ceph_vg0 finished
Jan 31 07:55:01 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 07:55:01 compute-0 sudo[72371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:01 compute-0 sudo[72464]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvfmscozxcsnkknutmplubdmkjgrxtf ; /usr/bin/python3'
Jan 31 07:55:01 compute-0 sudo[72464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:02 compute-0 python3[72466]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:55:02 compute-0 sudo[72464]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:02 compute-0 sudo[72537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eftzgpbwnptfqrdwhrnvxtucjqwrwysb ; /usr/bin/python3'
Jan 31 07:55:02 compute-0 sudo[72537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:02 compute-0 python3[72539]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846101.7741375-36517-204854030679799/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:55:02 compute-0 sudo[72537]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:02 compute-0 sudo[72587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elkpotbmyqtdognsghyvuxohwchvomio ; /usr/bin/python3'
Jan 31 07:55:02 compute-0 sudo[72587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:03 compute-0 python3[72589]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:55:03 compute-0 systemd[1]: Reloading.
Jan 31 07:55:03 compute-0 systemd-rc-local-generator[72609]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:55:03 compute-0 systemd-sysv-generator[72618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:55:03 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 07:55:03 compute-0 bash[72629]: /dev/loop3: [64513]:4355916 (/var/lib/ceph-osd-0.img)
Jan 31 07:55:03 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 07:55:03 compute-0 lvm[72630]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:55:03 compute-0 lvm[72630]: VG ceph_vg0 finished
Jan 31 07:55:03 compute-0 sudo[72587]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:03 compute-0 sudo[72654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhhxkmxsxmojsmqtgwonomdgknldbbjd ; /usr/bin/python3'
Jan 31 07:55:03 compute-0 sudo[72654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:03 compute-0 python3[72656]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:55:04 compute-0 chronyd[58665]: Selected source 67.205.162.81 (pool.ntp.org)
Jan 31 07:55:04 compute-0 sudo[72654]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:04 compute-0 sudo[72681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjfrtzpaddljufzcltdtczvrofijdhsn ; /usr/bin/python3'
Jan 31 07:55:05 compute-0 sudo[72681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:05 compute-0 python3[72683]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:05 compute-0 sudo[72681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:05 compute-0 sudo[72707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwlqiypmwcwzhlioibgppaspwtweyfkh ; /usr/bin/python3'
Jan 31 07:55:05 compute-0 sudo[72707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:05 compute-0 python3[72709]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:05 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 31 07:55:05 compute-0 sudo[72707]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:05 compute-0 sudo[72739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnvvdvqhjrietmzvdetsgzesxuilpolf ; /usr/bin/python3'
Jan 31 07:55:05 compute-0 sudo[72739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:05 compute-0 python3[72741]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:05 compute-0 lvm[72744]: PV /dev/loop4 not used.
Jan 31 07:55:06 compute-0 lvm[72746]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:55:06 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 31 07:55:06 compute-0 lvm[72748]:   0 logical volume(s) in volume group "ceph_vg1" now active
Jan 31 07:55:06 compute-0 lvm[72749]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:55:06 compute-0 lvm[72749]: VG ceph_vg1 finished
Jan 31 07:55:06 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 31 07:55:06 compute-0 lvm[72758]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:55:06 compute-0 lvm[72758]: VG ceph_vg1 finished
Jan 31 07:55:06 compute-0 sudo[72739]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:07 compute-0 sudo[72834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvhrdcazmrjeboalgaccgrgwiloucujv ; /usr/bin/python3'
Jan 31 07:55:07 compute-0 sudo[72834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:07 compute-0 python3[72836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:55:07 compute-0 sudo[72834]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:07 compute-0 sudo[72907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noyfblvqanuqsrkckpmcnozkducdpcym ; /usr/bin/python3'
Jan 31 07:55:07 compute-0 sudo[72907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:08 compute-0 python3[72909]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846107.0898948-36544-134473774036436/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:55:08 compute-0 sudo[72907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:08 compute-0 sudo[72957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uccorvluinjxsarpuxabditkdglidmfh ; /usr/bin/python3'
Jan 31 07:55:08 compute-0 sudo[72957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:08 compute-0 python3[72959]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:55:08 compute-0 systemd[1]: Reloading.
Jan 31 07:55:08 compute-0 systemd-rc-local-generator[72984]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:55:08 compute-0 systemd-sysv-generator[72990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:55:09 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 07:55:09 compute-0 bash[72999]: /dev/loop4: [64513]:4355918 (/var/lib/ceph-osd-1.img)
Jan 31 07:55:09 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 07:55:09 compute-0 sudo[72957]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:09 compute-0 lvm[73001]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:55:09 compute-0 lvm[73001]: VG ceph_vg1 finished
Jan 31 07:55:09 compute-0 sudo[73025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shepdinmlgmdpanuayvdhynkjlgeetyi ; /usr/bin/python3'
Jan 31 07:55:09 compute-0 sudo[73025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:09 compute-0 python3[73027]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:55:10 compute-0 sudo[73025]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:10 compute-0 sudo[73052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnjceajnggehfrtrkxzdlfkorvflaxjp ; /usr/bin/python3'
Jan 31 07:55:10 compute-0 sudo[73052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:10 compute-0 python3[73054]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:10 compute-0 sudo[73052]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:11 compute-0 sudo[73078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzxppdiexawpoqehticlyhyehrnijcsk ; /usr/bin/python3'
Jan 31 07:55:11 compute-0 sudo[73078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:11 compute-0 python3[73080]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:11 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 31 07:55:11 compute-0 sudo[73078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:11 compute-0 sudo[73109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbjlgmiyodovbtnejbextdgoaxseugkq ; /usr/bin/python3'
Jan 31 07:55:11 compute-0 sudo[73109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:11 compute-0 python3[73111]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:11 compute-0 lvm[73114]: PV /dev/loop5 not used.
Jan 31 07:55:12 compute-0 lvm[73117]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:55:12 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 31 07:55:12 compute-0 lvm[73127]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 31 07:55:12 compute-0 lvm[73129]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:55:12 compute-0 lvm[73129]: VG ceph_vg2 finished
Jan 31 07:55:12 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 31 07:55:12 compute-0 sudo[73109]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:12 compute-0 sudo[73205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oepgdzozwenismskfxmkjejhmwsvytph ; /usr/bin/python3'
Jan 31 07:55:12 compute-0 sudo[73205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:12 compute-0 python3[73207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:55:12 compute-0 sudo[73205]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:12 compute-0 sudo[73278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhixrxgiicqbtpzparipetecypbmdkkj ; /usr/bin/python3'
Jan 31 07:55:12 compute-0 sudo[73278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:13 compute-0 python3[73280]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846112.4755392-36571-112874593674658/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:55:13 compute-0 sudo[73278]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:13 compute-0 sudo[73328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdqvxnidyqqxkwpfbmueugssvitikznz ; /usr/bin/python3'
Jan 31 07:55:13 compute-0 sudo[73328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:13 compute-0 python3[73330]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:55:13 compute-0 systemd[1]: Reloading.
Jan 31 07:55:14 compute-0 systemd-sysv-generator[73364]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:55:14 compute-0 systemd-rc-local-generator[73360]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:55:14 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 07:55:14 compute-0 bash[73370]: /dev/loop5: [64513]:4355919 (/var/lib/ceph-osd-2.img)
Jan 31 07:55:15 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 07:55:15 compute-0 sudo[73328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:15 compute-0 lvm[73372]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:55:15 compute-0 lvm[73372]: VG ceph_vg2 finished
Jan 31 07:55:17 compute-0 python3[73396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:55:19 compute-0 sudo[73487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaajnhpvpaftyegkhksxhsrfoxqxoyql ; /usr/bin/python3'
Jan 31 07:55:19 compute-0 sudo[73487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:19 compute-0 python3[73489]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:55:24 compute-0 sudo[73487]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:24 compute-0 sudo[73544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgcuxtcvldadwzshozzwgefzdisqowcg ; /usr/bin/python3'
Jan 31 07:55:24 compute-0 sudo[73544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:24 compute-0 python3[73546]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:55:28 compute-0 groupadd[73556]: group added to /etc/group: name=cephadm, GID=993
Jan 31 07:55:28 compute-0 groupadd[73556]: group added to /etc/gshadow: name=cephadm
Jan 31 07:55:28 compute-0 groupadd[73556]: new group: name=cephadm, GID=993
Jan 31 07:55:28 compute-0 useradd[73563]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 31 07:55:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:55:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:55:29 compute-0 sudo[73544]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:29 compute-0 sudo[73662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgpcyipgipkjhfxxfvmivffygdzrezew ; /usr/bin/python3'
Jan 31 07:55:29 compute-0 sudo[73662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:29 compute-0 python3[73664]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:29 compute-0 sudo[73662]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:29 compute-0 sudo[73690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiwaaawgkxxijqjdtdkivmrzxwvpbqrp ; /usr/bin/python3'
Jan 31 07:55:29 compute-0 sudo[73690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:29 compute-0 python3[73692]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:55:30 compute-0 sudo[73690]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:30 compute-0 sudo[73729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daofvplzxndpvwbxsvoogebuukiuhtkg ; /usr/bin/python3'
Jan 31 07:55:30 compute-0 sudo[73729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:30 compute-0 python3[73731]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:55:30 compute-0 sudo[73729]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:30 compute-0 sudo[73755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyaylbbwjkfmtmxeclrmjktrlgzwwrjn ; /usr/bin/python3'
Jan 31 07:55:30 compute-0 sudo[73755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:31 compute-0 python3[73757]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:55:31 compute-0 sudo[73755]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:31 compute-0 sudo[73833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciqhvdtooefvudtamvtbaqrlgofiaocj ; /usr/bin/python3'
Jan 31 07:55:31 compute-0 sudo[73833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:31 compute-0 python3[73835]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:55:31 compute-0 sudo[73833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:55:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:55:32 compute-0 systemd[1]: run-rce5b82f249434d50b18ea966aeb9712f.service: Deactivated successfully.
Jan 31 07:55:32 compute-0 sudo[73907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikecpfujmieyjmnwpfjsxuuctatzjkrt ; /usr/bin/python3'
Jan 31 07:55:32 compute-0 sudo[73907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:32 compute-0 python3[73909]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846131.542766-36720-239050562933717/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:55:32 compute-0 sudo[73907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:32 compute-0 sudo[74009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jawqegejmgshalrsctofbjevlexifxmt ; /usr/bin/python3'
Jan 31 07:55:32 compute-0 sudo[74009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:32 compute-0 python3[74011]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:55:32 compute-0 sudo[74009]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:33 compute-0 sudo[74082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhgdavkhsniinsjutukempubyjigcquk ; /usr/bin/python3'
Jan 31 07:55:33 compute-0 sudo[74082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:33 compute-0 python3[74084]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846132.6673021-36738-120923686248409/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:55:33 compute-0 sudo[74082]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:33 compute-0 sudo[74132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjzinveuqefranfdbamzdltlfkqjsgxk ; /usr/bin/python3'
Jan 31 07:55:33 compute-0 sudo[74132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:33 compute-0 python3[74134]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:33 compute-0 sudo[74132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:33 compute-0 sudo[74160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvdulkrfdrgqmnmrzmrpmgdkmyrfvbv ; /usr/bin/python3'
Jan 31 07:55:33 compute-0 sudo[74160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:33 compute-0 python3[74162]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:33 compute-0 sudo[74160]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:34 compute-0 sudo[74188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpfsaumdbzqgxadmykhvrmkwliizxsxu ; /usr/bin/python3'
Jan 31 07:55:34 compute-0 sudo[74188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:34 compute-0 python3[74190]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:34 compute-0 sudo[74188]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:34 compute-0 python3[74216]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:55:34 compute-0 sudo[74240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlpltjvrppflrfjqcrbnmvfatnthxrth ; /usr/bin/python3'
Jan 31 07:55:34 compute-0 sudo[74240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:55:34 compute-0 python3[74242]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid dc03f344-536f-5591-add9-31059f42637c --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:55:35 compute-0 sshd-session[74246]: Accepted publickey for ceph-admin from 192.168.122.100 port 42580 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:55:35 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 07:55:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 07:55:35 compute-0 systemd-logind[810]: New session 19 of user ceph-admin.
Jan 31 07:55:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 07:55:35 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 07:55:35 compute-0 systemd[74250]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:55:35 compute-0 systemd[74250]: Queued start job for default target Main User Target.
Jan 31 07:55:35 compute-0 systemd[74250]: Created slice User Application Slice.
Jan 31 07:55:35 compute-0 systemd[74250]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:55:35 compute-0 systemd[74250]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:55:35 compute-0 systemd[74250]: Reached target Paths.
Jan 31 07:55:35 compute-0 systemd[74250]: Reached target Timers.
Jan 31 07:55:35 compute-0 systemd[74250]: Starting D-Bus User Message Bus Socket...
Jan 31 07:55:35 compute-0 systemd[74250]: Starting Create User's Volatile Files and Directories...
Jan 31 07:55:35 compute-0 systemd[74250]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:55:35 compute-0 systemd[74250]: Reached target Sockets.
Jan 31 07:55:35 compute-0 systemd[74250]: Finished Create User's Volatile Files and Directories.
Jan 31 07:55:35 compute-0 systemd[74250]: Reached target Basic System.
Jan 31 07:55:35 compute-0 systemd[74250]: Reached target Main User Target.
Jan 31 07:55:35 compute-0 systemd[74250]: Startup finished in 136ms.
Jan 31 07:55:35 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 07:55:35 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 31 07:55:35 compute-0 sshd-session[74246]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:55:35 compute-0 sudo[74266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 31 07:55:35 compute-0 sudo[74266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:35 compute-0 sudo[74266]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:35 compute-0 sshd-session[74265]: Received disconnect from 192.168.122.100 port 42580:11: disconnected by user
Jan 31 07:55:35 compute-0 sshd-session[74265]: Disconnected from user ceph-admin 192.168.122.100 port 42580
Jan 31 07:55:35 compute-0 sshd-session[74246]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 31 07:55:35 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 07:55:35 compute-0 systemd-logind[810]: Session 19 logged out. Waiting for processes to exit.
Jan 31 07:55:35 compute-0 systemd-logind[810]: Removed session 19.
Jan 31 07:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:55:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat624488390-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 07:55:41 compute-0 sshd-session[74384]: Invalid user sol from 193.32.162.145 port 56612
Jan 31 07:55:41 compute-0 sshd-session[74384]: Connection closed by invalid user sol 193.32.162.145 port 56612 [preauth]
Jan 31 07:55:45 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 07:55:45 compute-0 systemd[74250]: Activating special unit Exit the Session...
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped target Main User Target.
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped target Basic System.
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped target Paths.
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped target Sockets.
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped target Timers.
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 07:55:45 compute-0 systemd[74250]: Closed D-Bus User Message Bus Socket.
Jan 31 07:55:45 compute-0 systemd[74250]: Stopped Create User's Volatile Files and Directories.
Jan 31 07:55:45 compute-0 systemd[74250]: Removed slice User Application Slice.
Jan 31 07:55:45 compute-0 systemd[74250]: Reached target Shutdown.
Jan 31 07:55:45 compute-0 systemd[74250]: Finished Exit the Session.
Jan 31 07:55:45 compute-0 systemd[74250]: Reached target Exit the Session.
Jan 31 07:55:45 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 07:55:45 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 07:55:45 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 07:55:45 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 07:55:45 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 07:55:45 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 07:55:45 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 07:56:17 compute-0 podman[74343]: 2026-01-31 07:56:17.133063654 +0000 UTC m=+41.414503412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:17 compute-0 podman[74407]: 2026-01-31 07:56:17.188554375 +0000 UTC m=+0.031036966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:17 compute-0 podman[74407]: 2026-01-31 07:56:17.319211956 +0000 UTC m=+0.161694557 container create e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db (image=quay.io/ceph/ceph:v20, name=jolly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 31 07:56:17 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 07:56:17 compute-0 systemd[1]: Started libpod-conmon-e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db.scope.
Jan 31 07:56:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:17 compute-0 podman[74407]: 2026-01-31 07:56:17.683324578 +0000 UTC m=+0.525807189 container init e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db (image=quay.io/ceph/ceph:v20, name=jolly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 07:56:17 compute-0 podman[74407]: 2026-01-31 07:56:17.689810393 +0000 UTC m=+0.532292994 container start e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db (image=quay.io/ceph/ceph:v20, name=jolly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 07:56:17 compute-0 podman[74407]: 2026-01-31 07:56:17.77013145 +0000 UTC m=+0.612614011 container attach e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db (image=quay.io/ceph/ceph:v20, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:56:17 compute-0 jolly_herschel[74423]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 07:56:17 compute-0 systemd[1]: libpod-e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db.scope: Deactivated successfully.
Jan 31 07:56:17 compute-0 conmon[74423]: conmon e5a97bc5c739fe1f7f3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db.scope/container/memory.events
Jan 31 07:56:17 compute-0 podman[74407]: 2026-01-31 07:56:17.80027519 +0000 UTC m=+0.642757791 container died e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db (image=quay.io/ceph/ceph:v20, name=jolly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 07:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-026bf026775b5f9d1aff057d75ecceb6b8ba57f7f98996292823a15a2f336c4b-merged.mount: Deactivated successfully.
Jan 31 07:56:18 compute-0 podman[74407]: 2026-01-31 07:56:18.084269131 +0000 UTC m=+0.926751692 container remove e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db (image=quay.io/ceph/ceph:v20, name=jolly_herschel, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:56:18 compute-0 systemd[1]: libpod-conmon-e5a97bc5c739fe1f7f3b9787d121cf70b9b010f7315734daaaf974354f55f1db.scope: Deactivated successfully.
Jan 31 07:56:18 compute-0 podman[74442]: 2026-01-31 07:56:18.118310066 +0000 UTC m=+0.020045920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:18 compute-0 podman[74442]: 2026-01-31 07:56:18.269437956 +0000 UTC m=+0.171173820 container create 02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec (image=quay.io/ceph/ceph:v20, name=stoic_yonath, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:18 compute-0 systemd[1]: Started libpod-conmon-02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec.scope.
Jan 31 07:56:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:18 compute-0 podman[74442]: 2026-01-31 07:56:18.406698574 +0000 UTC m=+0.308434398 container init 02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec (image=quay.io/ceph/ceph:v20, name=stoic_yonath, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:56:18 compute-0 podman[74442]: 2026-01-31 07:56:18.412222552 +0000 UTC m=+0.313958366 container start 02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec (image=quay.io/ceph/ceph:v20, name=stoic_yonath, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 07:56:18 compute-0 stoic_yonath[74458]: 167 167
Jan 31 07:56:18 compute-0 systemd[1]: libpod-02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec.scope: Deactivated successfully.
Jan 31 07:56:18 compute-0 podman[74442]: 2026-01-31 07:56:18.461891817 +0000 UTC m=+0.363627641 container attach 02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec (image=quay.io/ceph/ceph:v20, name=stoic_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 07:56:18 compute-0 podman[74442]: 2026-01-31 07:56:18.463063428 +0000 UTC m=+0.364799252 container died 02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec (image=quay.io/ceph/ceph:v20, name=stoic_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 07:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e3ec7b5076c88faa92c5dbfbb14c14bcd705d6abfc7ee2e15e65c7f33dd7933-merged.mount: Deactivated successfully.
Jan 31 07:56:18 compute-0 podman[74442]: 2026-01-31 07:56:18.867633469 +0000 UTC m=+0.769369293 container remove 02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec (image=quay.io/ceph/ceph:v20, name=stoic_yonath, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:18 compute-0 systemd[1]: libpod-conmon-02c72217999122143a157e71d0baf40f36ab0426b4c53ba4ea462d0e29bda2ec.scope: Deactivated successfully.
Jan 31 07:56:18 compute-0 podman[74475]: 2026-01-31 07:56:18.953366032 +0000 UTC m=+0.067251418 container create 644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016 (image=quay.io/ceph/ceph:v20, name=reverent_noether, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:18 compute-0 systemd[1]: Started libpod-conmon-644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016.scope.
Jan 31 07:56:19 compute-0 podman[74475]: 2026-01-31 07:56:18.912082663 +0000 UTC m=+0.025968069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:19 compute-0 podman[74475]: 2026-01-31 07:56:19.053116442 +0000 UTC m=+0.167001848 container init 644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016 (image=quay.io/ceph/ceph:v20, name=reverent_noether, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:56:19 compute-0 podman[74475]: 2026-01-31 07:56:19.057097489 +0000 UTC m=+0.170982835 container start 644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016 (image=quay.io/ceph/ceph:v20, name=reverent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 31 07:56:19 compute-0 reverent_noether[74491]: AQCjtX1p7ZFxBBAAQPdM1VEnbYl+jy4TsKOUJg==
Jan 31 07:56:19 compute-0 systemd[1]: libpod-644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016.scope: Deactivated successfully.
Jan 31 07:56:19 compute-0 podman[74475]: 2026-01-31 07:56:19.093831896 +0000 UTC m=+0.207717262 container attach 644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016 (image=quay.io/ceph/ceph:v20, name=reverent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:19 compute-0 podman[74475]: 2026-01-31 07:56:19.094208907 +0000 UTC m=+0.208094253 container died 644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016 (image=quay.io/ceph/ceph:v20, name=reverent_noether, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0e569d470b91139e6fb09424b39cc1fc6d2afd3ca21e38f50e3364adbd0f23d-merged.mount: Deactivated successfully.
Jan 31 07:56:19 compute-0 podman[74475]: 2026-01-31 07:56:19.2301817 +0000 UTC m=+0.344067046 container remove 644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016 (image=quay.io/ceph/ceph:v20, name=reverent_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:19 compute-0 systemd[1]: libpod-conmon-644e552e8f663fd9f70bd7e5ab31290384346bf5a69188061fb4c79d39cf8016.scope: Deactivated successfully.
Jan 31 07:56:19 compute-0 podman[74511]: 2026-01-31 07:56:19.282815173 +0000 UTC m=+0.030319995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:19 compute-0 podman[74511]: 2026-01-31 07:56:19.382856082 +0000 UTC m=+0.130360854 container create b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e (image=quay.io/ceph/ceph:v20, name=festive_cray, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:19 compute-0 systemd[1]: Started libpod-conmon-b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e.scope.
Jan 31 07:56:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:19 compute-0 podman[74511]: 2026-01-31 07:56:19.472704116 +0000 UTC m=+0.220208988 container init b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e (image=quay.io/ceph/ceph:v20, name=festive_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:19 compute-0 podman[74511]: 2026-01-31 07:56:19.479603091 +0000 UTC m=+0.227107863 container start b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e (image=quay.io/ceph/ceph:v20, name=festive_cray, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:19 compute-0 podman[74511]: 2026-01-31 07:56:19.491571032 +0000 UTC m=+0.239075844 container attach b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e (image=quay.io/ceph/ceph:v20, name=festive_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 07:56:19 compute-0 festive_cray[74528]: AQCjtX1pgv1aHhAA2s8/suJ/5aBe41SC+FL1Tw==
Jan 31 07:56:19 compute-0 systemd[1]: libpod-b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e.scope: Deactivated successfully.
Jan 31 07:56:19 compute-0 podman[74511]: 2026-01-31 07:56:19.512779823 +0000 UTC m=+0.260284635 container died b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e (image=quay.io/ceph/ceph:v20, name=festive_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:56:19 compute-0 podman[74511]: 2026-01-31 07:56:19.811730184 +0000 UTC m=+0.559234996 container remove b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e (image=quay.io/ceph/ceph:v20, name=festive_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:56:19 compute-0 podman[74547]: 2026-01-31 07:56:19.857258678 +0000 UTC m=+0.025906688 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:19 compute-0 podman[74547]: 2026-01-31 07:56:19.992786748 +0000 UTC m=+0.161434768 container create c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200 (image=quay.io/ceph/ceph:v20, name=vibrant_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:56:20 compute-0 systemd[1]: Started libpod-conmon-c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200.scope.
Jan 31 07:56:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:20 compute-0 systemd[1]: libpod-conmon-b1e11c78b2d3cbad1765348636f1bbacb26ed32c1e536a383e2f4002d270fb4e.scope: Deactivated successfully.
Jan 31 07:56:20 compute-0 podman[74547]: 2026-01-31 07:56:20.168636904 +0000 UTC m=+0.337284894 container init c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200 (image=quay.io/ceph/ceph:v20, name=vibrant_khayyam, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:20 compute-0 podman[74547]: 2026-01-31 07:56:20.174266595 +0000 UTC m=+0.342914585 container start c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200 (image=quay.io/ceph/ceph:v20, name=vibrant_khayyam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:56:20 compute-0 vibrant_khayyam[74564]: AQCktX1pur4gDBAAyJL9dl7NjEI/L0A6FtO2lA==
Jan 31 07:56:20 compute-0 systemd[1]: libpod-c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200.scope: Deactivated successfully.
Jan 31 07:56:20 compute-0 podman[74547]: 2026-01-31 07:56:20.211719441 +0000 UTC m=+0.380367511 container attach c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200 (image=quay.io/ceph/ceph:v20, name=vibrant_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:20 compute-0 podman[74547]: 2026-01-31 07:56:20.212407509 +0000 UTC m=+0.381055529 container died c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200 (image=quay.io/ceph/ceph:v20, name=vibrant_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-aab7ab0b30420809360a5bdae1d863f21e602149547b1e436ae6f8065e173ecb-merged.mount: Deactivated successfully.
Jan 31 07:56:20 compute-0 podman[74547]: 2026-01-31 07:56:20.60024014 +0000 UTC m=+0.768888170 container remove c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200 (image=quay.io/ceph/ceph:v20, name=vibrant_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:20 compute-0 podman[74584]: 2026-01-31 07:56:20.719083383 +0000 UTC m=+0.100907012 container create b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283 (image=quay.io/ceph/ceph:v20, name=youthful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:56:20 compute-0 podman[74584]: 2026-01-31 07:56:20.641289904 +0000 UTC m=+0.023113583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:20 compute-0 systemd[1]: Started libpod-conmon-b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283.scope.
Jan 31 07:56:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:20 compute-0 systemd[1]: libpod-conmon-c63e3f450ab4f8822f30720b168511fc79e4a413fcea6271bba6e0ebcd50c200.scope: Deactivated successfully.
Jan 31 07:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30122436bc6d38c2130f11d16e289b6ee9e32480afa6a12279edeaf24286033a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:20 compute-0 podman[74584]: 2026-01-31 07:56:20.926316761 +0000 UTC m=+0.308140460 container init b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283 (image=quay.io/ceph/ceph:v20, name=youthful_shannon, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:20 compute-0 podman[74584]: 2026-01-31 07:56:20.932432535 +0000 UTC m=+0.314256164 container start b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283 (image=quay.io/ceph/ceph:v20, name=youthful_shannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:20 compute-0 youthful_shannon[74600]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 07:56:20 compute-0 youthful_shannon[74600]: setting min_mon_release = tentacle
Jan 31 07:56:20 compute-0 youthful_shannon[74600]: /usr/bin/monmaptool: set fsid to dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:20 compute-0 youthful_shannon[74600]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 07:56:20 compute-0 systemd[1]: libpod-b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283.scope: Deactivated successfully.
Jan 31 07:56:20 compute-0 podman[74584]: 2026-01-31 07:56:20.977835726 +0000 UTC m=+0.359659435 container attach b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283 (image=quay.io/ceph/ceph:v20, name=youthful_shannon, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:56:20 compute-0 podman[74584]: 2026-01-31 07:56:20.97909849 +0000 UTC m=+0.360922139 container died b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283 (image=quay.io/ceph/ceph:v20, name=youthful_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-30122436bc6d38c2130f11d16e289b6ee9e32480afa6a12279edeaf24286033a-merged.mount: Deactivated successfully.
Jan 31 07:56:21 compute-0 podman[74584]: 2026-01-31 07:56:21.216549279 +0000 UTC m=+0.598372888 container remove b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283 (image=quay.io/ceph/ceph:v20, name=youthful_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 07:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:21 compute-0 systemd[1]: libpod-conmon-b99a637f728fc3cecbc1603e6a1e8fa636dd0782f9dda6bd7d058de58876e283.scope: Deactivated successfully.
Jan 31 07:56:21 compute-0 podman[74619]: 2026-01-31 07:56:21.313407232 +0000 UTC m=+0.077981616 container create 488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98 (image=quay.io/ceph/ceph:v20, name=goofy_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:56:21 compute-0 systemd[1]: Started libpod-conmon-488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98.scope.
Jan 31 07:56:21 compute-0 podman[74619]: 2026-01-31 07:56:21.269870672 +0000 UTC m=+0.034445126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e5346bd094d05ee268310ea704a6f8d5dd63660d354a6d288e79522f76dab7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e5346bd094d05ee268310ea704a6f8d5dd63660d354a6d288e79522f76dab7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e5346bd094d05ee268310ea704a6f8d5dd63660d354a6d288e79522f76dab7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e5346bd094d05ee268310ea704a6f8d5dd63660d354a6d288e79522f76dab7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:21 compute-0 podman[74619]: 2026-01-31 07:56:21.410520781 +0000 UTC m=+0.175095215 container init 488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98 (image=quay.io/ceph/ceph:v20, name=goofy_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:21 compute-0 podman[74619]: 2026-01-31 07:56:21.416879491 +0000 UTC m=+0.181453885 container start 488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98 (image=quay.io/ceph/ceph:v20, name=goofy_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:56:21 compute-0 podman[74619]: 2026-01-31 07:56:21.435589845 +0000 UTC m=+0.200164199 container attach 488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98 (image=quay.io/ceph/ceph:v20, name=goofy_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:21 compute-0 systemd[1]: libpod-488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98.scope: Deactivated successfully.
Jan 31 07:56:21 compute-0 podman[74619]: 2026-01-31 07:56:21.718917617 +0000 UTC m=+0.483492011 container died 488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98 (image=quay.io/ceph/ceph:v20, name=goofy_sammet, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 07:56:21 compute-0 podman[74619]: 2026-01-31 07:56:21.867965562 +0000 UTC m=+0.632539916 container remove 488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98 (image=quay.io/ceph/ceph:v20, name=goofy_sammet, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:21 compute-0 systemd[1]: libpod-conmon-488693348ac3d33f63376c230235f073ad1230e1f12800a957af2cc9f3c3bb98.scope: Deactivated successfully.
Jan 31 07:56:21 compute-0 systemd[1]: Reloading.
Jan 31 07:56:22 compute-0 systemd-rc-local-generator[74705]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:56:22 compute-0 systemd-sysv-generator[74709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:22 compute-0 systemd[1]: Reloading.
Jan 31 07:56:22 compute-0 systemd-rc-local-generator[74736]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:56:22 compute-0 systemd-sysv-generator[74742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:56:22 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 07:56:22 compute-0 systemd[1]: Reloading.
Jan 31 07:56:22 compute-0 systemd-rc-local-generator[74781]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:56:22 compute-0 systemd-sysv-generator[74784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:56:22 compute-0 systemd[1]: Reached target Ceph cluster dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:56:23 compute-0 systemd[1]: Reloading.
Jan 31 07:56:23 compute-0 systemd-sysv-generator[74822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:56:23 compute-0 systemd-rc-local-generator[74817]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:56:23 compute-0 systemd[1]: Reloading.
Jan 31 07:56:23 compute-0 systemd-sysv-generator[74861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:56:23 compute-0 systemd-rc-local-generator[74858]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:56:23 compute-0 systemd[1]: Created slice Slice /system/ceph-dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:56:23 compute-0 systemd[1]: Reached target System Time Set.
Jan 31 07:56:23 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 31 07:56:23 compute-0 systemd[1]: Starting Ceph mon.compute-0 for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:23 compute-0 podman[74917]: 2026-01-31 07:56:23.958639183 +0000 UTC m=+0.097488301 container create c31bfd7eb868e79fd26bbe4b7cace047ac3f5cbea6c216c13484b5d6aa81e1fa (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:56:23 compute-0 podman[74917]: 2026-01-31 07:56:23.877358799 +0000 UTC m=+0.016207927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b822a1fd5135f6c9de787bb9e0447c444b0fc94c44ec55b2c2a9ce2d9ed597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b822a1fd5135f6c9de787bb9e0447c444b0fc94c44ec55b2c2a9ce2d9ed597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b822a1fd5135f6c9de787bb9e0447c444b0fc94c44ec55b2c2a9ce2d9ed597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b822a1fd5135f6c9de787bb9e0447c444b0fc94c44ec55b2c2a9ce2d9ed597/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:24 compute-0 podman[74917]: 2026-01-31 07:56:24.073766756 +0000 UTC m=+0.212615874 container init c31bfd7eb868e79fd26bbe4b7cace047ac3f5cbea6c216c13484b5d6aa81e1fa (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:24 compute-0 podman[74917]: 2026-01-31 07:56:24.0776178 +0000 UTC m=+0.216466918 container start c31bfd7eb868e79fd26bbe4b7cace047ac3f5cbea6c216c13484b5d6aa81e1fa (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 07:56:24 compute-0 bash[74917]: c31bfd7eb868e79fd26bbe4b7cace047ac3f5cbea6c216c13484b5d6aa81e1fa
Jan 31 07:56:24 compute-0 systemd[1]: Started Ceph mon.compute-0 for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:56:24 compute-0 ceph-mon[74936]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: pidfile_write: ignore empty --pid-file
Jan 31 07:56:24 compute-0 ceph-mon[74936]: load: jerasure load: lrc 
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Git sha 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: DB SUMMARY
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: DB Session ID:  4UEZWJEKNOA8H30ZWMJF
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                                     Options.env: 0x55bc902d1440
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                                Options.info_log: 0x55bc913d9d60
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                                 Options.wal_dir: 
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                    Options.write_buffer_manager: 0x55bc913dc140
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                               Options.row_cache: None
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                              Options.wal_filter: None
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.wal_compression: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.max_background_jobs: 2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Compression algorithms supported:
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kZSTD supported: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:           Options.merge_operator: 
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:        Options.compaction_filter: None
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bc913d8cc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bc913cd8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:          Options.compression: NoCompression
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.num_levels: 7
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d3294ee4-27e2-4bb0-ad9a-134acd801483
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846184126431, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846184198733, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "4UEZWJEKNOA8H30ZWMJF", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846184198879, "job": 1, "event": "recovery_finished"}
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bc913fae00
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: DB pointer 0x55bc91546000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:56:24 compute-0 ceph-mon[74936]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.072       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bc913cd8d0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:56:24 compute-0 ceph-mon[74936]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@-1(???) e0 preinit fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 07:56:24 compute-0 ceph-mon[74936]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 07:56:24 compute-0 podman[74958]: 2026-01-31 07:56:24.357259152 +0000 UTC m=+0.112223666 container create 58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8 (image=quay.io/ceph/ceph:v20, name=dazzling_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:56:24 compute-0 podman[74958]: 2026-01-31 07:56:24.27416949 +0000 UTC m=+0.029134094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 07:56:24 compute-0 systemd[1]: Started libpod-conmon-58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8.scope.
Jan 31 07:56:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 07:56:24 compute-0 ceph-mon[74936]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 07:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e7425d89b947113b45513ce1ded33a65e52247b3f2c8e93a9aa090aa8da190/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e7425d89b947113b45513ce1ded33a65e52247b3f2c8e93a9aa090aa8da190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e7425d89b947113b45513ce1ded33a65e52247b3f2c8e93a9aa090aa8da190/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : created 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T07:56:21.473481Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,os=Linux}
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 31 07:56:24 compute-0 podman[74958]: 2026-01-31 07:56:24.525185474 +0000 UTC m=+0.280149978 container init 58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8 (image=quay.io/ceph/ceph:v20, name=dazzling_mestorf, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:24 compute-0 podman[74958]: 2026-01-31 07:56:24.530430815 +0000 UTC m=+0.285395319 container start 58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8 (image=quay.io/ceph/ceph:v20, name=dazzling_mestorf, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).mds e1 new map
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-31T07:56:24:478364+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 07:56:24 compute-0 podman[74958]: 2026-01-31 07:56:24.55704207 +0000 UTC m=+0.312006604 container attach 58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8 (image=quay.io/ceph/ceph:v20, name=dazzling_mestorf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mkfs dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 07:56:24 compute-0 ceph-mon[74936]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156340357' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:   cluster:
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     id:     dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     health: HEALTH_OK
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:  
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:   services:
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     mon: 1 daemons, quorum compute-0 (age 0.450226s) [leader: compute-0]
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     mgr: no daemons active
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     osd: 0 osds: 0 up, 0 in
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:  
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:   data:
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     pools:   0 pools, 0 pgs
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     objects: 0 objects, 0 B
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     usage:   0 B used, 0 B / 0 B avail
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:     pgs:     
Jan 31 07:56:24 compute-0 dazzling_mestorf[74991]:  
Jan 31 07:56:24 compute-0 systemd[1]: libpod-58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8.scope: Deactivated successfully.
Jan 31 07:56:24 compute-0 podman[74958]: 2026-01-31 07:56:24.930559836 +0000 UTC m=+0.685524340 container died 58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8 (image=quay.io/ceph/ceph:v20, name=dazzling_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e7425d89b947113b45513ce1ded33a65e52247b3f2c8e93a9aa090aa8da190-merged.mount: Deactivated successfully.
Jan 31 07:56:25 compute-0 podman[74958]: 2026-01-31 07:56:25.112477214 +0000 UTC m=+0.867441728 container remove 58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8 (image=quay.io/ceph/ceph:v20, name=dazzling_mestorf, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 07:56:25 compute-0 systemd[1]: libpod-conmon-58125487be03ba7a9b68703e69ebeab4fea9e83b2b1a27b2e950137a8e88a0d8.scope: Deactivated successfully.
Jan 31 07:56:25 compute-0 podman[75029]: 2026-01-31 07:56:25.245302653 +0000 UTC m=+0.118493075 container create d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036 (image=quay.io/ceph/ceph:v20, name=pedantic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:25 compute-0 podman[75029]: 2026-01-31 07:56:25.154943545 +0000 UTC m=+0.028133927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:25 compute-0 systemd[1]: Started libpod-conmon-d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036.scope.
Jan 31 07:56:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99a4532dbae804e2ef998ff8d09223d665350d452f93d55b7ca6bee81316634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99a4532dbae804e2ef998ff8d09223d665350d452f93d55b7ca6bee81316634/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99a4532dbae804e2ef998ff8d09223d665350d452f93d55b7ca6bee81316634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99a4532dbae804e2ef998ff8d09223d665350d452f93d55b7ca6bee81316634/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:25 compute-0 podman[75029]: 2026-01-31 07:56:25.406501353 +0000 UTC m=+0.279691765 container init d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036 (image=quay.io/ceph/ceph:v20, name=pedantic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:56:25 compute-0 podman[75029]: 2026-01-31 07:56:25.413292736 +0000 UTC m=+0.286483138 container start d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036 (image=quay.io/ceph/ceph:v20, name=pedantic_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 07:56:25 compute-0 podman[75029]: 2026-01-31 07:56:25.462553039 +0000 UTC m=+0.335743451 container attach d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036 (image=quay.io/ceph/ceph:v20, name=pedantic_beaver, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:25 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 07:56:25 compute-0 ceph-mon[74936]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/799491792' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 07:56:25 compute-0 ceph-mon[74936]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:56:25 compute-0 ceph-mon[74936]: monmap epoch 1
Jan 31 07:56:25 compute-0 ceph-mon[74936]: fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:25 compute-0 ceph-mon[74936]: last_changed 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:25 compute-0 ceph-mon[74936]: created 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:25 compute-0 ceph-mon[74936]: min_mon_release 20 (tentacle)
Jan 31 07:56:25 compute-0 ceph-mon[74936]: election_strategy: 1
Jan 31 07:56:25 compute-0 ceph-mon[74936]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 07:56:25 compute-0 ceph-mon[74936]: fsmap 
Jan 31 07:56:25 compute-0 ceph-mon[74936]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:56:25 compute-0 ceph-mon[74936]: mgrmap e1: no daemons active
Jan 31 07:56:25 compute-0 ceph-mon[74936]: from='client.? 192.168.122.100:0/2156340357' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 07:56:25 compute-0 ceph-mon[74936]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/799491792' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:56:25 compute-0 pedantic_beaver[75046]: 
Jan 31 07:56:25 compute-0 pedantic_beaver[75046]: [global]
Jan 31 07:56:25 compute-0 pedantic_beaver[75046]:         fsid = dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:25 compute-0 pedantic_beaver[75046]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 07:56:25 compute-0 pedantic_beaver[75046]:         osd_crush_chooseleaf_type = 0
Jan 31 07:56:25 compute-0 systemd[1]: libpod-d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036.scope: Deactivated successfully.
Jan 31 07:56:25 compute-0 podman[75029]: 2026-01-31 07:56:25.814400273 +0000 UTC m=+0.687590655 container died d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036 (image=quay.io/ceph/ceph:v20, name=pedantic_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f99a4532dbae804e2ef998ff8d09223d665350d452f93d55b7ca6bee81316634-merged.mount: Deactivated successfully.
Jan 31 07:56:25 compute-0 podman[75029]: 2026-01-31 07:56:25.967849936 +0000 UTC m=+0.841040318 container remove d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036 (image=quay.io/ceph/ceph:v20, name=pedantic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 07:56:25 compute-0 systemd[1]: libpod-conmon-d328d5d3f2b0ea32f28b5a33d94e6ed32fd1b287463a9d75d9f4dfeffab3c036.scope: Deactivated successfully.
Jan 31 07:56:26 compute-0 podman[75085]: 2026-01-31 07:56:26.029371859 +0000 UTC m=+0.047354824 container create 40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a (image=quay.io/ceph/ceph:v20, name=vigilant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 07:56:26 compute-0 podman[75085]: 2026-01-31 07:56:26.004254664 +0000 UTC m=+0.022237649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:26 compute-0 systemd[1]: Started libpod-conmon-40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a.scope.
Jan 31 07:56:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc3c63196c079d17dde31004427ae4449cb971bb8c7c295b7e7e7bff3ea2cb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc3c63196c079d17dde31004427ae4449cb971bb8c7c295b7e7e7bff3ea2cb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc3c63196c079d17dde31004427ae4449cb971bb8c7c295b7e7e7bff3ea2cb4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc3c63196c079d17dde31004427ae4449cb971bb8c7c295b7e7e7bff3ea2cb4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:26 compute-0 podman[75085]: 2026-01-31 07:56:26.224717697 +0000 UTC m=+0.242700692 container init 40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a (image=quay.io/ceph/ceph:v20, name=vigilant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 07:56:26 compute-0 podman[75085]: 2026-01-31 07:56:26.231195702 +0000 UTC m=+0.249178667 container start 40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a (image=quay.io/ceph/ceph:v20, name=vigilant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:56:26 compute-0 podman[75085]: 2026-01-31 07:56:26.301641114 +0000 UTC m=+0.319624119 container attach 40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a (image=quay.io/ceph/ceph:v20, name=vigilant_allen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 07:56:26 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:56:26 compute-0 ceph-mon[74936]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460867776' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:56:26 compute-0 systemd[1]: libpod-40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a.scope: Deactivated successfully.
Jan 31 07:56:26 compute-0 podman[75085]: 2026-01-31 07:56:26.451742357 +0000 UTC m=+0.469725382 container died 40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a (image=quay.io/ceph/ceph:v20, name=vigilant_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fc3c63196c079d17dde31004427ae4449cb971bb8c7c295b7e7e7bff3ea2cb4-merged.mount: Deactivated successfully.
Jan 31 07:56:26 compute-0 ceph-mon[74936]: from='client.? 192.168.122.100:0/799491792' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 07:56:26 compute-0 ceph-mon[74936]: from='client.? 192.168.122.100:0/799491792' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:56:26 compute-0 ceph-mon[74936]: from='client.? 192.168.122.100:0/460867776' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:56:26 compute-0 podman[75085]: 2026-01-31 07:56:26.857469307 +0000 UTC m=+0.875452272 container remove 40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a (image=quay.io/ceph/ceph:v20, name=vigilant_allen, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:56:26 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:56:26 compute-0 systemd[1]: libpod-conmon-40d6b8a012a69947d3b86ad88d6da2455a2951fb54ad53a3926f8a0bd324ae3a.scope: Deactivated successfully.
Jan 31 07:56:27 compute-0 ceph-mon[74936]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 07:56:27 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 07:56:27 compute-0 ceph-mon[74936]: mon.compute-0@0(leader) e1 shutdown
Jan 31 07:56:27 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0[74932]: 2026-01-31T07:56:27.029+0000 7f503123c640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 07:56:27 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0[74932]: 2026-01-31T07:56:27.029+0000 7f503123c640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 07:56:27 compute-0 ceph-mon[74936]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 07:56:27 compute-0 ceph-mon[74936]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 07:56:27 compute-0 podman[75169]: 2026-01-31 07:56:27.26625417 +0000 UTC m=+0.280579629 container died c31bfd7eb868e79fd26bbe4b7cace047ac3f5cbea6c216c13484b5d6aa81e1fa (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 07:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-74b822a1fd5135f6c9de787bb9e0447c444b0fc94c44ec55b2c2a9ce2d9ed597-merged.mount: Deactivated successfully.
Jan 31 07:56:27 compute-0 podman[75169]: 2026-01-31 07:56:27.551808213 +0000 UTC m=+0.566133652 container remove c31bfd7eb868e79fd26bbe4b7cace047ac3f5cbea6c216c13484b5d6aa81e1fa (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:56:27 compute-0 bash[75169]: ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0
Jan 31 07:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:56:27 compute-0 systemd[1]: ceph-dc03f344-536f-5591-add9-31059f42637c@mon.compute-0.service: Deactivated successfully.
Jan 31 07:56:27 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:56:27 compute-0 systemd[1]: Starting Ceph mon.compute-0 for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:56:27 compute-0 podman[75275]: 2026-01-31 07:56:27.946031855 +0000 UTC m=+0.066439077 container create 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:27 compute-0 podman[75275]: 2026-01-31 07:56:27.901002484 +0000 UTC m=+0.021409746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887131c85dd14cc828ef462cabef4a1d3a21bc83c0a192798d5534622b7fb0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887131c85dd14cc828ef462cabef4a1d3a21bc83c0a192798d5534622b7fb0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887131c85dd14cc828ef462cabef4a1d3a21bc83c0a192798d5534622b7fb0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887131c85dd14cc828ef462cabef4a1d3a21bc83c0a192798d5534622b7fb0b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:28 compute-0 podman[75275]: 2026-01-31 07:56:28.190863623 +0000 UTC m=+0.311270875 container init 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:28 compute-0 podman[75275]: 2026-01-31 07:56:28.198713574 +0000 UTC m=+0.319120796 container start 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:28 compute-0 bash[75275]: 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: pidfile_write: ignore empty --pid-file
Jan 31 07:56:28 compute-0 ceph-mon[75294]: load: jerasure load: lrc 
Jan 31 07:56:28 compute-0 systemd[1]: Started Ceph mon.compute-0 for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Git sha 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: DB SUMMARY
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: DB Session ID:  Y7RM6XVJX1JMWBYCK9C2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 61633 ; 
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                                     Options.env: 0x55cc8aa71440
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                                Options.info_log: 0x55cc8bf49e80
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                                 Options.wal_dir: 
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                    Options.write_buffer_manager: 0x55cc8bf94140
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                               Options.row_cache: None
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                              Options.wal_filter: None
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.wal_compression: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.max_background_jobs: 2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Compression algorithms supported:
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kZSTD supported: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:           Options.merge_operator: 
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:        Options.compaction_filter: None
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cc8bfa0a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55cc8bf858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:          Options.compression: NoCompression
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.num_levels: 7
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d3294ee4-27e2-4bb0-ad9a-134acd801483
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846188257224, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846188313526, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 61226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 150, "table_properties": {"data_size": 59685, "index_size": 183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3459, "raw_average_key_size": 30, "raw_value_size": 56988, "raw_average_value_size": 504, "num_data_blocks": 9, "num_entries": 113, "num_filter_entries": 113, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846188, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846188313800, "job": 1, "event": "recovery_finished"}
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 07:56:28 compute-0 podman[75316]: 2026-01-31 07:56:28.365928477 +0000 UTC m=+0.091193442 container create ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9 (image=quay.io/ceph/ceph:v20, name=pedantic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cc8bfb2e00
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: DB pointer 0x55cc8c0fc000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:56:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   61.69 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      2/0   61.69 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cc8bf858d0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:56:28 compute-0 ceph-mon[75294]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???) e1 preinit fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???).mds e1 new map
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-31T07:56:24:478364+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 07:56:28 compute-0 ceph-mon[75294]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 07:56:28 compute-0 podman[75316]: 2026-01-31 07:56:28.304941148 +0000 UTC m=+0.030206213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : created 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:56:28 compute-0 systemd[1]: Started libpod-conmon-ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9.scope.
Jan 31 07:56:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc03c778a742122cdba240267d841b00d42bd0c845d3d0ab21e09681ef62544/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc03c778a742122cdba240267d841b00d42bd0c845d3d0ab21e09681ef62544/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc03c778a742122cdba240267d841b00d42bd0c845d3d0ab21e09681ef62544/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 07:56:28 compute-0 podman[75316]: 2026-01-31 07:56:28.585366262 +0000 UTC m=+0.310631217 container init ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9 (image=quay.io/ceph/ceph:v20, name=pedantic_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:28 compute-0 podman[75316]: 2026-01-31 07:56:28.592883225 +0000 UTC m=+0.318148180 container start ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9 (image=quay.io/ceph/ceph:v20, name=pedantic_dubinsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: monmap epoch 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:28 compute-0 ceph-mon[75294]: last_changed 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: created 2026-01-31T07:56:20.975396+0000
Jan 31 07:56:28 compute-0 ceph-mon[75294]: min_mon_release 20 (tentacle)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: election_strategy: 1
Jan 31 07:56:28 compute-0 ceph-mon[75294]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 07:56:28 compute-0 ceph-mon[75294]: fsmap 
Jan 31 07:56:28 compute-0 ceph-mon[75294]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mgrmap e1: no daemons active
Jan 31 07:56:28 compute-0 podman[75316]: 2026-01-31 07:56:28.646466624 +0000 UTC m=+0.371731609 container attach ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9 (image=quay.io/ceph/ceph:v20, name=pedantic_dubinsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:56:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 31 07:56:28 compute-0 systemd[1]: libpod-ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9.scope: Deactivated successfully.
Jan 31 07:56:28 compute-0 podman[75316]: 2026-01-31 07:56:28.784557095 +0000 UTC m=+0.509822050 container died ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9 (image=quay.io/ceph/ceph:v20, name=pedantic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc03c778a742122cdba240267d841b00d42bd0c845d3d0ab21e09681ef62544-merged.mount: Deactivated successfully.
Jan 31 07:56:29 compute-0 podman[75316]: 2026-01-31 07:56:29.045969078 +0000 UTC m=+0.771234033 container remove ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9 (image=quay.io/ceph/ceph:v20, name=pedantic_dubinsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:56:29 compute-0 podman[75388]: 2026-01-31 07:56:29.176883355 +0000 UTC m=+0.113647514 container create c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f (image=quay.io/ceph/ceph:v20, name=blissful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:29 compute-0 podman[75388]: 2026-01-31 07:56:29.079904099 +0000 UTC m=+0.016668278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:29 compute-0 systemd[1]: Started libpod-conmon-c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f.scope.
Jan 31 07:56:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516227df7c5b04a675f29c8996d4e4f585889776d202bd7d133ac1385985df02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516227df7c5b04a675f29c8996d4e4f585889776d202bd7d133ac1385985df02/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516227df7c5b04a675f29c8996d4e4f585889776d202bd7d133ac1385985df02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:29 compute-0 systemd[1]: libpod-conmon-ca632316a3b33012f41d3dc96da8ab34bed33cc07e6ef89d10f8ae8eabe2e7f9.scope: Deactivated successfully.
Jan 31 07:56:29 compute-0 podman[75388]: 2026-01-31 07:56:29.286452359 +0000 UTC m=+0.223216578 container init c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f (image=quay.io/ceph/ceph:v20, name=blissful_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:29 compute-0 podman[75388]: 2026-01-31 07:56:29.291186076 +0000 UTC m=+0.227950235 container start c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f (image=quay.io/ceph/ceph:v20, name=blissful_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:56:29 compute-0 podman[75388]: 2026-01-31 07:56:29.415125876 +0000 UTC m=+0.351890045 container attach c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f (image=quay.io/ceph/ceph:v20, name=blissful_hoover, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:56:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 31 07:56:29 compute-0 systemd[1]: libpod-c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f.scope: Deactivated successfully.
Jan 31 07:56:29 compute-0 podman[75388]: 2026-01-31 07:56:29.495708191 +0000 UTC m=+0.432472370 container died c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f (image=quay.io/ceph/ceph:v20, name=blissful_hoover, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-516227df7c5b04a675f29c8996d4e4f585889776d202bd7d133ac1385985df02-merged.mount: Deactivated successfully.
Jan 31 07:56:29 compute-0 podman[75388]: 2026-01-31 07:56:29.8265485 +0000 UTC m=+0.763312659 container remove c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f (image=quay.io/ceph/ceph:v20, name=blissful_hoover, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 07:56:29 compute-0 systemd[1]: libpod-conmon-c57b21eaa7bf0e1c75df0db107b08deebd362b2c70848020ddaa699c6910d63f.scope: Deactivated successfully.
Jan 31 07:56:30 compute-0 systemd[1]: Reloading.
Jan 31 07:56:30 compute-0 systemd-sysv-generator[75474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:56:30 compute-0 systemd-rc-local-generator[75469]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:56:30 compute-0 systemd[1]: Reloading.
Jan 31 07:56:30 compute-0 systemd-sysv-generator[75515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:56:30 compute-0 systemd-rc-local-generator[75512]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:56:30 compute-0 systemd[1]: Starting Ceph mgr.compute-0.lhuavc for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:56:31 compute-0 podman[75570]: 2026-01-31 07:56:31.010588282 +0000 UTC m=+0.072467657 container create 81f4bb2dc444c8c93ef78a0cb274e5fb814f70ab702cced801be3829e82a316e (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 31 07:56:31 compute-0 podman[75570]: 2026-01-31 07:56:30.959599182 +0000 UTC m=+0.021478607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d961b1abf1f8f1dd9c6b26d3a0aa0bde97ac3c7165ea5b924901012cab1305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d961b1abf1f8f1dd9c6b26d3a0aa0bde97ac3c7165ea5b924901012cab1305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d961b1abf1f8f1dd9c6b26d3a0aa0bde97ac3c7165ea5b924901012cab1305/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d961b1abf1f8f1dd9c6b26d3a0aa0bde97ac3c7165ea5b924901012cab1305/merged/var/lib/ceph/mgr/ceph-compute-0.lhuavc supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:31 compute-0 podman[75570]: 2026-01-31 07:56:31.174094405 +0000 UTC m=+0.235973880 container init 81f4bb2dc444c8c93ef78a0cb274e5fb814f70ab702cced801be3829e82a316e (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 07:56:31 compute-0 podman[75570]: 2026-01-31 07:56:31.181540705 +0000 UTC m=+0.243420120 container start 81f4bb2dc444c8c93ef78a0cb274e5fb814f70ab702cced801be3829e82a316e (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:31 compute-0 ceph-mgr[75591]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:56:31 compute-0 ceph-mgr[75591]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 07:56:31 compute-0 ceph-mgr[75591]: pidfile_write: ignore empty --pid-file
Jan 31 07:56:31 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'alerts'
Jan 31 07:56:31 compute-0 bash[75570]: 81f4bb2dc444c8c93ef78a0cb274e5fb814f70ab702cced801be3829e82a316e
Jan 31 07:56:31 compute-0 systemd[1]: Started Ceph mgr.compute-0.lhuavc for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:56:31 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'balancer'
Jan 31 07:56:31 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'cephadm'
Jan 31 07:56:31 compute-0 podman[75612]: 2026-01-31 07:56:31.415166462 +0000 UTC m=+0.034535598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:31 compute-0 podman[75612]: 2026-01-31 07:56:31.569226491 +0000 UTC m=+0.188595537 container create d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 07:56:31 compute-0 systemd[1]: Started libpod-conmon-d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d.scope.
Jan 31 07:56:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c67f105599fe130bc611fd6e706c390f63b60f473900b98af56972c6b8e082/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c67f105599fe130bc611fd6e706c390f63b60f473900b98af56972c6b8e082/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c67f105599fe130bc611fd6e706c390f63b60f473900b98af56972c6b8e082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:31 compute-0 podman[75612]: 2026-01-31 07:56:31.833300577 +0000 UTC m=+0.452669653 container init d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:56:31 compute-0 podman[75612]: 2026-01-31 07:56:31.8404932 +0000 UTC m=+0.459862266 container start d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 07:56:31 compute-0 podman[75612]: 2026-01-31 07:56:31.973900644 +0000 UTC m=+0.593269730 container attach d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:56:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 07:56:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3839484897' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]: 
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]: {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "health": {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "status": "HEALTH_OK",
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "checks": {},
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "mutes": []
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     },
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "election_epoch": 5,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "quorum": [
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         0
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     ],
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "quorum_names": [
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "compute-0"
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     ],
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "quorum_age": 3,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "monmap": {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "epoch": 1,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "min_mon_release_name": "tentacle",
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_mons": 1
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     },
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "osdmap": {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "epoch": 1,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_osds": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_up_osds": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "osd_up_since": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_in_osds": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "osd_in_since": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_remapped_pgs": 0
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     },
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "pgmap": {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "pgs_by_state": [],
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_pgs": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_pools": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_objects": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "data_bytes": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "bytes_used": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "bytes_avail": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "bytes_total": 0
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     },
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "fsmap": {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "epoch": 1,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "btime": "2026-01-31T07:56:24:478364+0000",
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "by_rank": [],
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "up:standby": 0
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     },
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "mgrmap": {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "available": false,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "num_standbys": 0,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "modules": [
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:             "iostat",
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:             "nfs"
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         ],
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "services": {}
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     },
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "servicemap": {
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "epoch": 1,
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "modified": "2026-01-31T07:56:24.518276+0000",
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:         "services": {}
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     },
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]:     "progress_events": {}
Jan 31 07:56:32 compute-0 tender_bhaskara[75629]: }
Jan 31 07:56:32 compute-0 systemd[1]: libpod-d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d.scope: Deactivated successfully.
Jan 31 07:56:32 compute-0 podman[75612]: 2026-01-31 07:56:32.036730912 +0000 UTC m=+0.656100018 container died d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 07:56:32 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'crash'
Jan 31 07:56:32 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3839484897' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:32 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'dashboard'
Jan 31 07:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c67f105599fe130bc611fd6e706c390f63b60f473900b98af56972c6b8e082-merged.mount: Deactivated successfully.
Jan 31 07:56:32 compute-0 podman[75612]: 2026-01-31 07:56:32.480901377 +0000 UTC m=+1.100270423 container remove d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d (image=quay.io/ceph/ceph:v20, name=tender_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:56:32 compute-0 systemd[1]: libpod-conmon-d3ed9bb7401766b3c2080cbee51354ced68700adc57bbddb912125035e10391d.scope: Deactivated successfully.
Jan 31 07:56:32 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'devicehealth'
Jan 31 07:56:32 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 07:56:33 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 07:56:33 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 07:56:33 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]:   from numpy import show_config as show_numpy_config
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'influx'
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'insights'
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'iostat'
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'k8sevents'
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'localpool'
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'mirroring'
Jan 31 07:56:33 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'nfs'
Jan 31 07:56:34 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'orchestrator'
Jan 31 07:56:34 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 07:56:34 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'osd_support'
Jan 31 07:56:34 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 07:56:34 compute-0 podman[75680]: 2026-01-31 07:56:34.587349772 +0000 UTC m=+0.079228670 container create 6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e (image=quay.io/ceph/ceph:v20, name=frosty_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 31 07:56:34 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'progress'
Jan 31 07:56:34 compute-0 podman[75680]: 2026-01-31 07:56:34.539400654 +0000 UTC m=+0.031279552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:34 compute-0 systemd[1]: Started libpod-conmon-6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e.scope.
Jan 31 07:56:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab731939f02e0b4932be8ab8fe26c88a5d7a3c5f4751319deba7f8f7fd931cd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab731939f02e0b4932be8ab8fe26c88a5d7a3c5f4751319deba7f8f7fd931cd9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab731939f02e0b4932be8ab8fe26c88a5d7a3c5f4751319deba7f8f7fd931cd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:34 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'prometheus'
Jan 31 07:56:34 compute-0 podman[75680]: 2026-01-31 07:56:34.708535178 +0000 UTC m=+0.200414056 container init 6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e (image=quay.io/ceph/ceph:v20, name=frosty_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:56:34 compute-0 podman[75680]: 2026-01-31 07:56:34.71308416 +0000 UTC m=+0.204963038 container start 6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e (image=quay.io/ceph/ceph:v20, name=frosty_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 07:56:34 compute-0 podman[75680]: 2026-01-31 07:56:34.770259807 +0000 UTC m=+0.262138675 container attach 6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e (image=quay.io/ceph/ceph:v20, name=frosty_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 07:56:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 07:56:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247227127' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]: 
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]: {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "health": {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "status": "HEALTH_OK",
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "checks": {},
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "mutes": []
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     },
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "election_epoch": 5,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "quorum": [
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         0
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     ],
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "quorum_names": [
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "compute-0"
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     ],
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "quorum_age": 6,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "monmap": {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "epoch": 1,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "min_mon_release_name": "tentacle",
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_mons": 1
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     },
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "osdmap": {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "epoch": 1,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_osds": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_up_osds": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "osd_up_since": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_in_osds": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "osd_in_since": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_remapped_pgs": 0
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     },
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "pgmap": {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "pgs_by_state": [],
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_pgs": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_pools": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_objects": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "data_bytes": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "bytes_used": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "bytes_avail": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "bytes_total": 0
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     },
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "fsmap": {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "epoch": 1,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "btime": "2026-01-31T07:56:24:478364+0000",
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "by_rank": [],
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "up:standby": 0
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     },
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "mgrmap": {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "available": false,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "num_standbys": 0,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "modules": [
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:             "iostat",
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:             "nfs"
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         ],
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "services": {}
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     },
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "servicemap": {
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "epoch": 1,
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "modified": "2026-01-31T07:56:24.518276+0000",
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:         "services": {}
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     },
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]:     "progress_events": {}
Jan 31 07:56:34 compute-0 frosty_chaplygin[75696]: }
Jan 31 07:56:34 compute-0 systemd[1]: libpod-6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e.scope: Deactivated successfully.
Jan 31 07:56:34 compute-0 podman[75680]: 2026-01-31 07:56:34.9200096 +0000 UTC m=+0.411888488 container died 6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e (image=quay.io/ceph/ceph:v20, name=frosty_chaplygin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 07:56:35 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'rbd_support'
Jan 31 07:56:35 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2247227127' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:35 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'rgw'
Jan 31 07:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab731939f02e0b4932be8ab8fe26c88a5d7a3c5f4751319deba7f8f7fd931cd9-merged.mount: Deactivated successfully.
Jan 31 07:56:35 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'rook'
Jan 31 07:56:35 compute-0 podman[75680]: 2026-01-31 07:56:35.434707818 +0000 UTC m=+0.926586736 container remove 6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e (image=quay.io/ceph/ceph:v20, name=frosty_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:56:35 compute-0 systemd[1]: libpod-conmon-6a17bb631f2ee0aeb939741eaa01d1fd7991f135afa7a42c555ae780642ece3e.scope: Deactivated successfully.
Jan 31 07:56:35 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'selftest'
Jan 31 07:56:35 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'smb'
Jan 31 07:56:36 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'snap_schedule'
Jan 31 07:56:36 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'stats'
Jan 31 07:56:36 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'status'
Jan 31 07:56:36 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'telegraf'
Jan 31 07:56:36 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'telemetry'
Jan 31 07:56:36 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'volumes'
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: ms_deliver_dispatch: unhandled message 0x55f8ae3fd860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lhuavc
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr handle_mgr_map Activating!
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.lhuavc(active, starting, since 0.181322s)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr handle_mgr_map I am now activating
Jan 31 07:56:37 compute-0 podman[75734]: 2026-01-31 07:56:37.478639434 +0000 UTC m=+0.023563904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.lhuavc", "id": "compute-0.lhuavc"} v 0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr metadata", "who": "compute-0.lhuavc", "id": "compute-0.lhuavc"} : dispatch
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: balancer
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: crash
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Manager daemon compute-0.lhuavc is now available
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [balancer INFO root] Starting
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: devicehealth
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_07:56:37
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [balancer INFO root] No pools available
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Starting
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: iostat
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: nfs
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: orchestrator
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: pg_autoscaler
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: progress
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [progress INFO root] Loading...
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [progress INFO root] No stored events to load
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [progress INFO root] Loaded [] historic events
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:56:37 compute-0 podman[75734]: 2026-01-31 07:56:37.597916819 +0000 UTC m=+0.142841259 container create 3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626 (image=quay.io/ceph/ceph:v20, name=festive_vaughan, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] recovery thread starting
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] starting setup
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: rbd_support
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: status
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/mirror_snapshot_schedule"} v 0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/mirror_snapshot_schedule"} : dispatch
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: telemetry
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] PerfHandler: starting
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TaskHandler: starting
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/trash_purge_schedule"} v 0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/trash_purge_schedule"} : dispatch
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: volumes
Jan 31 07:56:37 compute-0 ceph-mon[75294]: Activating manager daemon compute-0.lhuavc
Jan 31 07:56:37 compute-0 systemd[1]: Started libpod-conmon-3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626.scope.
Jan 31 07:56:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7cb31dde947839deaef7c8a3e8e46467ccaac934545220c5974ac005de059d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7cb31dde947839deaef7c8a3e8e46467ccaac934545220c5974ac005de059d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7cb31dde947839deaef7c8a3e8e46467ccaac934545220c5974ac005de059d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 07:56:37 compute-0 ceph-mgr[75591]: [rbd_support INFO root] setup complete
Jan 31 07:56:37 compute-0 podman[75734]: 2026-01-31 07:56:37.836087998 +0000 UTC m=+0.381012468 container init 3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626 (image=quay.io/ceph/ceph:v20, name=festive_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:56:37 compute-0 podman[75734]: 2026-01-31 07:56:37.840606269 +0000 UTC m=+0.385530709 container start 3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626 (image=quay.io/ceph/ceph:v20, name=festive_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 31 07:56:37 compute-0 podman[75734]: 2026-01-31 07:56:37.859448816 +0000 UTC m=+0.404373276 container attach 3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626 (image=quay.io/ceph/ceph:v20, name=festive_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:56:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 07:56:38 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411748503' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:38 compute-0 festive_vaughan[75829]: 
Jan 31 07:56:38 compute-0 festive_vaughan[75829]: {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "health": {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "status": "HEALTH_OK",
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "checks": {},
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "mutes": []
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     },
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "election_epoch": 5,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "quorum": [
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         0
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     ],
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "quorum_names": [
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "compute-0"
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     ],
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "quorum_age": 9,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "monmap": {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "epoch": 1,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "min_mon_release_name": "tentacle",
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_mons": 1
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     },
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "osdmap": {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "epoch": 1,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_osds": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_up_osds": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "osd_up_since": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_in_osds": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "osd_in_since": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_remapped_pgs": 0
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     },
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "pgmap": {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "pgs_by_state": [],
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_pgs": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_pools": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_objects": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "data_bytes": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "bytes_used": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "bytes_avail": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "bytes_total": 0
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     },
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "fsmap": {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "epoch": 1,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "btime": "2026-01-31T07:56:24.478364+0000",
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "by_rank": [],
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "up:standby": 0
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     },
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "mgrmap": {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "available": false,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "num_standbys": 0,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "modules": [
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:             "iostat",
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:             "nfs"
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         ],
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "services": {}
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     },
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "servicemap": {
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "epoch": 1,
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "modified": "2026-01-31T07:56:24.518276+0000",
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:         "services": {}
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     },
Jan 31 07:56:38 compute-0 festive_vaughan[75829]:     "progress_events": {}
Jan 31 07:56:38 compute-0 festive_vaughan[75829]: }
Jan 31 07:56:38 compute-0 systemd[1]: libpod-3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626.scope: Deactivated successfully.
Jan 31 07:56:38 compute-0 podman[75734]: 2026-01-31 07:56:38.050585501 +0000 UTC m=+0.595509941 container died 3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626 (image=quay.io/ceph/ceph:v20, name=festive_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d7cb31dde947839deaef7c8a3e8e46467ccaac934545220c5974ac005de059d-merged.mount: Deactivated successfully.
Jan 31 07:56:38 compute-0 podman[75734]: 2026-01-31 07:56:38.289393277 +0000 UTC m=+0.834317717 container remove 3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626 (image=quay.io/ceph/ceph:v20, name=festive_vaughan, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:56:38 compute-0 systemd[1]: libpod-conmon-3768eabf52968e8211bdbb7b7b7edcf9bf204b765158cfded901eb267951b626.scope: Deactivated successfully.
Jan 31 07:56:38 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.lhuavc(active, since 1.32703s)
Jan 31 07:56:38 compute-0 ceph-mon[75294]: mgrmap e2: compute-0.lhuavc(active, starting, since 0.181322s)
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr metadata", "who": "compute-0.lhuavc", "id": "compute-0.lhuavc"} : dispatch
Jan 31 07:56:38 compute-0 ceph-mon[75294]: Manager daemon compute-0.lhuavc is now available
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/mirror_snapshot_schedule"} : dispatch
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/trash_purge_schedule"} : dispatch
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='mgr.14102 192.168.122.100:0/851903230' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:38 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1411748503' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:39 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:56:39 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:56:39 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.lhuavc(active, since 2s)
Jan 31 07:56:39 compute-0 ceph-mon[75294]: mgrmap e3: compute-0.lhuavc(active, since 1.32703s)
Jan 31 07:56:40 compute-0 podman[75868]: 2026-01-31 07:56:40.42851508 +0000 UTC m=+0.117435176 container create ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3 (image=quay.io/ceph/ceph:v20, name=heuristic_ramanujan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 07:56:40 compute-0 podman[75868]: 2026-01-31 07:56:40.340918177 +0000 UTC m=+0.029838303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:40 compute-0 systemd[1]: Started libpod-conmon-ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3.scope.
Jan 31 07:56:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d7412068c4c2bac6d56d8d5f822f1ee3a69f705bd8566fb6b68e1edd23bd09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d7412068c4c2bac6d56d8d5f822f1ee3a69f705bd8566fb6b68e1edd23bd09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d7412068c4c2bac6d56d8d5f822f1ee3a69f705bd8566fb6b68e1edd23bd09/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:40 compute-0 podman[75868]: 2026-01-31 07:56:40.624981279 +0000 UTC m=+0.313901475 container init ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3 (image=quay.io/ceph/ceph:v20, name=heuristic_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:56:40 compute-0 podman[75868]: 2026-01-31 07:56:40.62950858 +0000 UTC m=+0.318428676 container start ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3 (image=quay.io/ceph/ceph:v20, name=heuristic_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:56:40 compute-0 podman[75868]: 2026-01-31 07:56:40.668251321 +0000 UTC m=+0.357171437 container attach ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3 (image=quay.io/ceph/ceph:v20, name=heuristic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:56:41 compute-0 ceph-mon[75294]: mgrmap e4: compute-0.lhuavc(active, since 2s)
Jan 31 07:56:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 07:56:41 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074294282' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]: 
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]: {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "health": {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "status": "HEALTH_OK",
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "checks": {},
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "mutes": []
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     },
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "election_epoch": 5,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "quorum": [
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         0
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     ],
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "quorum_names": [
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "compute-0"
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     ],
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "quorum_age": 12,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "monmap": {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "epoch": 1,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "min_mon_release_name": "tentacle",
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_mons": 1
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     },
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "osdmap": {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "epoch": 1,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_osds": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_up_osds": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "osd_up_since": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_in_osds": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "osd_in_since": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_remapped_pgs": 0
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     },
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "pgmap": {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "pgs_by_state": [],
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_pgs": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_pools": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_objects": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "data_bytes": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "bytes_used": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "bytes_avail": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "bytes_total": 0
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     },
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "fsmap": {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "epoch": 1,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "btime": "2026-01-31T07:56:24.478364+0000",
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "by_rank": [],
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "up:standby": 0
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     },
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "mgrmap": {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "available": true,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "num_standbys": 0,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "modules": [
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:             "iostat",
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:             "nfs"
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         ],
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "services": {}
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     },
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "servicemap": {
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "epoch": 1,
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "modified": "2026-01-31T07:56:24.518276+0000",
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:         "services": {}
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     },
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]:     "progress_events": {}
Jan 31 07:56:41 compute-0 heuristic_ramanujan[75884]: }
Jan 31 07:56:41 compute-0 systemd[1]: libpod-ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3.scope: Deactivated successfully.
Jan 31 07:56:41 compute-0 podman[75868]: 2026-01-31 07:56:41.160191758 +0000 UTC m=+0.849111854 container died ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3 (image=quay.io/ceph/ceph:v20, name=heuristic_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0d7412068c4c2bac6d56d8d5f822f1ee3a69f705bd8566fb6b68e1edd23bd09-merged.mount: Deactivated successfully.
Jan 31 07:56:41 compute-0 podman[75868]: 2026-01-31 07:56:41.40996435 +0000 UTC m=+1.098884456 container remove ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3 (image=quay.io/ceph/ceph:v20, name=heuristic_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 07:56:41 compute-0 systemd[1]: libpod-conmon-ee5ee90322c7c5ea9361227997cb28699f53b12edf1b1e89a27eb8e66504c0a3.scope: Deactivated successfully.
Jan 31 07:56:41 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:56:41 compute-0 podman[75922]: 2026-01-31 07:56:41.518242029 +0000 UTC m=+0.091282524 container create dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d (image=quay.io/ceph/ceph:v20, name=adoring_golick, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 07:56:41 compute-0 podman[75922]: 2026-01-31 07:56:41.445212477 +0000 UTC m=+0.018252952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:41 compute-0 systemd[1]: Started libpod-conmon-dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d.scope.
Jan 31 07:56:41 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:56:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598fdb990947a5a82de7103ff04e01f2ce9b5103e7aff26c07832bcc34f4b7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598fdb990947a5a82de7103ff04e01f2ce9b5103e7aff26c07832bcc34f4b7d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598fdb990947a5a82de7103ff04e01f2ce9b5103e7aff26c07832bcc34f4b7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598fdb990947a5a82de7103ff04e01f2ce9b5103e7aff26c07832bcc34f4b7d/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:41 compute-0 podman[75922]: 2026-01-31 07:56:41.639587099 +0000 UTC m=+0.212627604 container init dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d (image=quay.io/ceph/ceph:v20, name=adoring_golick, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:56:41 compute-0 podman[75922]: 2026-01-31 07:56:41.644602473 +0000 UTC m=+0.217642928 container start dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d (image=quay.io/ceph/ceph:v20, name=adoring_golick, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:41 compute-0 podman[75922]: 2026-01-31 07:56:41.810685206 +0000 UTC m=+0.383725691 container attach dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d (image=quay.io/ceph/ceph:v20, name=adoring_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 07:56:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 07:56:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1208357619' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 07:56:42 compute-0 adoring_golick[75938]: 
Jan 31 07:56:42 compute-0 adoring_golick[75938]: [global]
Jan 31 07:56:42 compute-0 adoring_golick[75938]:         fsid = dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:56:42 compute-0 adoring_golick[75938]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 07:56:42 compute-0 adoring_golick[75938]:         osd_crush_chooseleaf_type = 0
Jan 31 07:56:42 compute-0 systemd[1]: libpod-dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d.scope: Deactivated successfully.
Jan 31 07:56:42 compute-0 podman[75922]: 2026-01-31 07:56:42.069044107 +0000 UTC m=+0.642084572 container died dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d (image=quay.io/ceph/ceph:v20, name=adoring_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 07:56:42 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2074294282' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 07:56:42 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1208357619' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 07:56:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0598fdb990947a5a82de7103ff04e01f2ce9b5103e7aff26c07832bcc34f4b7d-merged.mount: Deactivated successfully.
Jan 31 07:56:42 compute-0 podman[75922]: 2026-01-31 07:56:42.554034968 +0000 UTC m=+1.127075423 container remove dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d (image=quay.io/ceph/ceph:v20, name=adoring_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:42 compute-0 podman[75977]: 2026-01-31 07:56:42.582754009 +0000 UTC m=+0.016618487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:42 compute-0 podman[75977]: 2026-01-31 07:56:42.72193777 +0000 UTC m=+0.155802228 container create 1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4 (image=quay.io/ceph/ceph:v20, name=boring_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:42 compute-0 systemd[1]: Started libpod-conmon-1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4.scope.
Jan 31 07:56:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761390a0e48645643f99a93bcf616057c981e2d30247a11967c76de40a907b3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761390a0e48645643f99a93bcf616057c981e2d30247a11967c76de40a907b3f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761390a0e48645643f99a93bcf616057c981e2d30247a11967c76de40a907b3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:42 compute-0 podman[75977]: 2026-01-31 07:56:42.843753332 +0000 UTC m=+0.277617880 container init 1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4 (image=quay.io/ceph/ceph:v20, name=boring_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:42 compute-0 podman[75977]: 2026-01-31 07:56:42.848724795 +0000 UTC m=+0.282589283 container start 1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4 (image=quay.io/ceph/ceph:v20, name=boring_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:56:42 compute-0 podman[75977]: 2026-01-31 07:56:42.880373406 +0000 UTC m=+0.314237964 container attach 1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4 (image=quay.io/ceph/ceph:v20, name=boring_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:42 compute-0 systemd[1]: libpod-conmon-dface938b0b906b5ea0164049b53677b813c22f55ea1550a8669734934d8786d.scope: Deactivated successfully.
Jan 31 07:56:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 31 07:56:43 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1088370187' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:56:43 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1088370187' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  1: '-n'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  2: 'mgr.compute-0.lhuavc'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  3: '-f'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  4: '--setuser'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  5: 'ceph'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  6: '--setgroup'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  7: 'ceph'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr respawn  exe_path /proc/self/exe
Jan 31 07:56:43 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.lhuavc(active, since 6s)
Jan 31 07:56:43 compute-0 systemd[1]: libpod-1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4.scope: Deactivated successfully.
Jan 31 07:56:43 compute-0 podman[75977]: 2026-01-31 07:56:43.868367712 +0000 UTC m=+1.302232200 container died 1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4 (image=quay.io/ceph/ceph:v20, name=boring_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 07:56:43 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: ignoring --setuser ceph since I am not root
Jan 31 07:56:43 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: ignoring --setgroup ceph since I am not root
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: pidfile_write: ignore empty --pid-file
Jan 31 07:56:43 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'alerts'
Jan 31 07:56:44 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1088370187' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 07:56:44 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'balancer'
Jan 31 07:56:44 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'cephadm'
Jan 31 07:56:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-761390a0e48645643f99a93bcf616057c981e2d30247a11967c76de40a907b3f-merged.mount: Deactivated successfully.
Jan 31 07:56:44 compute-0 podman[75977]: 2026-01-31 07:56:44.901596033 +0000 UTC m=+2.335460491 container remove 1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4 (image=quay.io/ceph/ceph:v20, name=boring_cohen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:44 compute-0 systemd[1]: libpod-conmon-1cce04a9fa208de855583979ee329385628c2497fa876443a5b85569a3e842a4.scope: Deactivated successfully.
Jan 31 07:56:44 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'crash'
Jan 31 07:56:45 compute-0 podman[76062]: 2026-01-31 07:56:44.946462668 +0000 UTC m=+0.026833782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:45 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'dashboard'
Jan 31 07:56:45 compute-0 podman[76062]: 2026-01-31 07:56:45.109923709 +0000 UTC m=+0.190294843 container create 2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949 (image=quay.io/ceph/ceph:v20, name=sharp_panini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:45 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1088370187' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 07:56:45 compute-0 ceph-mon[75294]: mgrmap e5: compute-0.lhuavc(active, since 6s)
Jan 31 07:56:45 compute-0 systemd[1]: Started libpod-conmon-2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949.scope.
Jan 31 07:56:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b26084e89e9c73455fd2aeeb2f5a1ca925b325b5a7fca0400678da3f4efaa5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b26084e89e9c73455fd2aeeb2f5a1ca925b325b5a7fca0400678da3f4efaa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b26084e89e9c73455fd2aeeb2f5a1ca925b325b5a7fca0400678da3f4efaa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:45 compute-0 podman[76062]: 2026-01-31 07:56:45.421683886 +0000 UTC m=+0.502055010 container init 2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949 (image=quay.io/ceph/ceph:v20, name=sharp_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:56:45 compute-0 podman[76062]: 2026-01-31 07:56:45.426481184 +0000 UTC m=+0.506852278 container start 2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949 (image=quay.io/ceph/ceph:v20, name=sharp_panini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 07:56:45 compute-0 podman[76062]: 2026-01-31 07:56:45.446094831 +0000 UTC m=+0.526466015 container attach 2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949 (image=quay.io/ceph/ceph:v20, name=sharp_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:45 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'devicehealth'
Jan 31 07:56:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 07:56:45 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/420710602' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 07:56:45 compute-0 sharp_panini[76079]: {
Jan 31 07:56:45 compute-0 sharp_panini[76079]:     "epoch": 5,
Jan 31 07:56:45 compute-0 sharp_panini[76079]:     "available": true,
Jan 31 07:56:45 compute-0 sharp_panini[76079]:     "active_name": "compute-0.lhuavc",
Jan 31 07:56:45 compute-0 sharp_panini[76079]:     "num_standby": 0
Jan 31 07:56:45 compute-0 sharp_panini[76079]: }
Jan 31 07:56:45 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 07:56:45 compute-0 systemd[1]: libpod-2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949.scope: Deactivated successfully.
Jan 31 07:56:45 compute-0 podman[76062]: 2026-01-31 07:56:45.884865511 +0000 UTC m=+0.965236615 container died 2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949 (image=quay.io/ceph/ceph:v20, name=sharp_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:46 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 07:56:46 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 07:56:46 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]:   from numpy import show_config as show_numpy_config
Jan 31 07:56:46 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'influx'
Jan 31 07:56:46 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'insights'
Jan 31 07:56:46 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'iostat'
Jan 31 07:56:46 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'k8sevents'
Jan 31 07:56:46 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/420710602' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 07:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-86b26084e89e9c73455fd2aeeb2f5a1ca925b325b5a7fca0400678da3f4efaa5-merged.mount: Deactivated successfully.
Jan 31 07:56:46 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'localpool'
Jan 31 07:56:46 compute-0 podman[76062]: 2026-01-31 07:56:46.662956666 +0000 UTC m=+1.743327760 container remove 2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949 (image=quay.io/ceph/ceph:v20, name=sharp_panini, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 07:56:46 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 07:56:46 compute-0 podman[76116]: 2026-01-31 07:56:46.700616708 +0000 UTC m=+0.023440860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:46 compute-0 podman[76116]: 2026-01-31 07:56:46.813607594 +0000 UTC m=+0.136431726 container create f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90 (image=quay.io/ceph/ceph:v20, name=musing_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:46 compute-0 systemd[1]: Started libpod-conmon-f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90.scope.
Jan 31 07:56:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54355f48e390298caf2db0b102b5277f26ce4df778ffde8ae64c1a10f258da88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54355f48e390298caf2db0b102b5277f26ce4df778ffde8ae64c1a10f258da88/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54355f48e390298caf2db0b102b5277f26ce4df778ffde8ae64c1a10f258da88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:46 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'mirroring'
Jan 31 07:56:46 compute-0 systemd[1]: libpod-conmon-2687824ae2d2134374e31f1cbafb8464865643ab9a637ce17b25160ce8f37949.scope: Deactivated successfully.
Jan 31 07:56:47 compute-0 podman[76116]: 2026-01-31 07:56:47.010533795 +0000 UTC m=+0.333357947 container init f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90 (image=quay.io/ceph/ceph:v20, name=musing_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:47 compute-0 podman[76116]: 2026-01-31 07:56:47.017188963 +0000 UTC m=+0.340013105 container start f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90 (image=quay.io/ceph/ceph:v20, name=musing_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:56:47 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'nfs'
Jan 31 07:56:47 compute-0 podman[76116]: 2026-01-31 07:56:47.062633945 +0000 UTC m=+0.385458137 container attach f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90 (image=quay.io/ceph/ceph:v20, name=musing_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:56:47 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'orchestrator'
Jan 31 07:56:47 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 07:56:47 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'osd_support'
Jan 31 07:56:47 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 07:56:47 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'progress'
Jan 31 07:56:47 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'prometheus'
Jan 31 07:56:48 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'rbd_support'
Jan 31 07:56:48 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'rgw'
Jan 31 07:56:48 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'rook'
Jan 31 07:56:48 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'selftest'
Jan 31 07:56:49 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'smb'
Jan 31 07:56:49 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'snap_schedule'
Jan 31 07:56:49 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'stats'
Jan 31 07:56:49 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'status'
Jan 31 07:56:49 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'telegraf'
Jan 31 07:56:49 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'telemetry'
Jan 31 07:56:49 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: mgr[py] Loading python module 'volumes'
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Active manager daemon compute-0.lhuavc restarted
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lhuavc
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: ms_deliver_dispatch: unhandled message 0x5635e100c000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: mgr handle_mgr_map Activating!
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: mgr handle_mgr_map I am now activating
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.lhuavc(active, starting, since 0.401142s)
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.lhuavc", "id": "compute-0.lhuavc"} v 0)
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr metadata", "who": "compute-0.lhuavc", "id": "compute-0.lhuavc"} : dispatch
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 07:56:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: balancer
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Starting
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:50 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Manager daemon compute-0.lhuavc is now available
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_07:56:50
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 07:56:50 compute-0 ceph-mgr[75591]: [balancer INFO root] No pools available
Jan 31 07:56:50 compute-0 ceph-mon[75294]: Active manager daemon compute-0.lhuavc restarted
Jan 31 07:56:50 compute-0 ceph-mon[75294]: Activating manager daemon compute-0.lhuavc
Jan 31 07:56:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 31 07:56:51 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 31 07:56:52 compute-0 ceph-mon[75294]: osdmap e2: 0 total, 0 up, 0 in
Jan 31 07:56:52 compute-0 ceph-mon[75294]: mgrmap e6: compute-0.lhuavc(active, starting, since 0.401142s)
Jan 31 07:56:52 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 07:56:52 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr metadata", "who": "compute-0.lhuavc", "id": "compute-0.lhuavc"} : dispatch
Jan 31 07:56:52 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 07:56:52 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 07:56:52 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 07:56:52 compute-0 ceph-mon[75294]: Manager daemon compute-0.lhuavc is now available
Jan 31 07:56:52 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 07:56:52 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.lhuavc(active, since 2s)
Jan 31 07:56:52 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 07:56:52 compute-0 musing_ritchie[76133]: {
Jan 31 07:56:52 compute-0 musing_ritchie[76133]:     "mgrmap_epoch": 7,
Jan 31 07:56:52 compute-0 musing_ritchie[76133]:     "initialized": true
Jan 31 07:56:52 compute-0 musing_ritchie[76133]: }
Jan 31 07:56:52 compute-0 systemd[1]: libpod-f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90.scope: Deactivated successfully.
Jan 31 07:56:52 compute-0 podman[76116]: 2026-01-31 07:56:52.339554994 +0000 UTC m=+5.662379106 container died f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90 (image=quay.io/ceph/ceph:v20, name=musing_ritchie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:52 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 07:56:52 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 07:56:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 31 07:56:52 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:56:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 31 07:56:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019903778 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:56:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-54355f48e390298caf2db0b102b5277f26ce4df778ffde8ae64c1a10f258da88-merged.mount: Deactivated successfully.
Jan 31 07:56:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:53 compute-0 ceph-mon[75294]: mgrmap e7: compute-0.lhuavc(active, since 2s)
Jan 31 07:56:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:53 compute-0 ceph-mon[75294]: Found migration_current of "None". Setting to last migration.
Jan 31 07:56:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: cephadm
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: crash
Jan 31 07:56:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 07:56:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: devicehealth
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Starting
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: iostat
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: nfs
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: orchestrator
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: pg_autoscaler
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: progress
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [progress INFO root] Loading...
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [progress INFO root] No stored events to load
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [progress INFO root] Loaded [] historic events
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:56:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 07:56:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:54 compute-0 podman[76116]: 2026-01-31 07:56:54.73369734 +0000 UTC m=+8.056521492 container remove f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90 (image=quay.io/ceph/ceph:v20, name=musing_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] recovery thread starting
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] starting setup
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: rbd_support
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: status
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/mirror_snapshot_schedule"} v 0)
Jan 31 07:56:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/mirror_snapshot_schedule"} : dispatch
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: telemetry
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] PerfHandler: starting
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TaskHandler: starting
Jan 31 07:56:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/trash_purge_schedule"} v 0)
Jan 31 07:56:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/trash_purge_schedule"} : dispatch
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] setup complete
Jan 31 07:56:54 compute-0 ceph-mgr[75591]: mgr load Constructed class from module: volumes
Jan 31 07:56:54 compute-0 podman[76244]: 2026-01-31 07:56:54.775224635 +0000 UTC m=+0.020362338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:54 compute-0 podman[76244]: 2026-01-31 07:56:54.875916291 +0000 UTC m=+0.121054014 container create 19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5 (image=quay.io/ceph/ceph:v20, name=interesting_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 31 07:56:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/mirror_snapshot_schedule"} : dispatch
Jan 31 07:56:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lhuavc/trash_purge_schedule"} : dispatch
Jan 31 07:56:55 compute-0 systemd[1]: Started libpod-conmon-19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5.scope.
Jan 31 07:56:55 compute-0 systemd[1]: libpod-conmon-f1d93a7d534989f9f726dc5e07910d53c8b4d063a062bd42cd3804e3c1a1dd90.scope: Deactivated successfully.
Jan 31 07:56:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ce491a06544dacf9eaa4d94f53f40ac57b0850d923ce138be1747420345749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ce491a06544dacf9eaa4d94f53f40ac57b0850d923ce138be1747420345749/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ce491a06544dacf9eaa4d94f53f40ac57b0850d923ce138be1747420345749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:55 compute-0 podman[76244]: 2026-01-31 07:56:55.38986547 +0000 UTC m=+0.635003153 container init 19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5 (image=quay.io/ceph/ceph:v20, name=interesting_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 31 07:56:55 compute-0 podman[76244]: 2026-01-31 07:56:55.39767648 +0000 UTC m=+0.642814193 container start 19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5 (image=quay.io/ceph/ceph:v20, name=interesting_lamarr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 07:56:55 compute-0 podman[76244]: 2026-01-31 07:56:55.450254912 +0000 UTC m=+0.695392675 container attach 19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5 (image=quay.io/ceph/ceph:v20, name=interesting_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:56:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 31 07:56:55 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3751902496' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 07:56:56 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3751902496' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:56:56] ENGINE Bus STARTING
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:56:56] ENGINE Bus STARTING
Jan 31 07:56:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3751902496' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 07:56:56 compute-0 interesting_lamarr[76297]: module 'orchestrator' is already enabled (always-on)
Jan 31 07:56:56 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.lhuavc(active, since 6s)
Jan 31 07:56:56 compute-0 systemd[1]: libpod-19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5.scope: Deactivated successfully.
Jan 31 07:56:56 compute-0 podman[76244]: 2026-01-31 07:56:56.530921707 +0000 UTC m=+1.776059440 container died 19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5 (image=quay.io/ceph/ceph:v20, name=interesting_lamarr, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:56:56] ENGINE Serving on http://192.168.122.100:8765
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:56:56] ENGINE Serving on http://192.168.122.100:8765
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:56:56] ENGINE Serving on https://192.168.122.100:7150
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:56:56] ENGINE Serving on https://192.168.122.100:7150
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:56:56] ENGINE Bus STARTED
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:56:56] ENGINE Bus STARTED
Jan 31 07:56:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 07:56:56 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:56:56] ENGINE Client ('192.168.122.100', 60382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:56:56] ENGINE Client ('192.168.122.100', 60382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:56:56 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:56:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-38ce491a06544dacf9eaa4d94f53f40ac57b0850d923ce138be1747420345749-merged.mount: Deactivated successfully.
Jan 31 07:56:57 compute-0 podman[76244]: 2026-01-31 07:56:57.406860943 +0000 UTC m=+2.651998616 container remove 19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5 (image=quay.io/ceph/ceph:v20, name=interesting_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:57 compute-0 podman[76359]: 2026-01-31 07:56:57.444468053 +0000 UTC m=+0.018104458 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:57 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.lhuavc(active, since 7s)
Jan 31 07:56:58 compute-0 podman[76359]: 2026-01-31 07:56:58.070196105 +0000 UTC m=+0.643832480 container create a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15 (image=quay.io/ceph/ceph:v20, name=naughty_cartwright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:58 compute-0 ceph-mon[75294]: [31/Jan/2026:07:56:56] ENGINE Bus STARTING
Jan 31 07:56:58 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3751902496' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 07:56:58 compute-0 ceph-mon[75294]: mgrmap e8: compute-0.lhuavc(active, since 6s)
Jan 31 07:56:58 compute-0 ceph-mon[75294]: [31/Jan/2026:07:56:56] ENGINE Serving on http://192.168.122.100:8765
Jan 31 07:56:58 compute-0 ceph-mon[75294]: [31/Jan/2026:07:56:56] ENGINE Serving on https://192.168.122.100:7150
Jan 31 07:56:58 compute-0 ceph-mon[75294]: [31/Jan/2026:07:56:56] ENGINE Bus STARTED
Jan 31 07:56:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:58 compute-0 ceph-mon[75294]: [31/Jan/2026:07:56:56] ENGINE Client ('192.168.122.100', 60382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 07:56:58 compute-0 systemd[1]: Started libpod-conmon-a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15.scope.
Jan 31 07:56:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce817f270d7fc118be690eaef18fad2b83f768b1b36d8a2e125caf97023b47d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce817f270d7fc118be690eaef18fad2b83f768b1b36d8a2e125caf97023b47d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce817f270d7fc118be690eaef18fad2b83f768b1b36d8a2e125caf97023b47d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:58 compute-0 podman[76359]: 2026-01-31 07:56:58.443412572 +0000 UTC m=+1.017048967 container init a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15 (image=quay.io/ceph/ceph:v20, name=naughty_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:58 compute-0 podman[76359]: 2026-01-31 07:56:58.451295603 +0000 UTC m=+1.024931988 container start a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15 (image=quay.io/ceph/ceph:v20, name=naughty_cartwright, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:56:58 compute-0 podman[76359]: 2026-01-31 07:56:58.684853589 +0000 UTC m=+1.258490004 container attach a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15 (image=quay.io/ceph/ceph:v20, name=naughty_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:56:58 compute-0 systemd[1]: libpod-conmon-19a3d3e392be51dae68d83f8cbcaf41d3f465ed3163fd975049701a69ef90eb5.scope: Deactivated successfully.
Jan 31 07:56:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052648 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:56:58 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:56:58 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:56:58 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:56:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 31 07:56:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 07:56:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:58 compute-0 systemd[1]: libpod-a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15.scope: Deactivated successfully.
Jan 31 07:56:58 compute-0 podman[76359]: 2026-01-31 07:56:58.947909637 +0000 UTC m=+1.521546022 container died a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15 (image=quay.io/ceph/ceph:v20, name=naughty_cartwright, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ce817f270d7fc118be690eaef18fad2b83f768b1b36d8a2e125caf97023b47d-merged.mount: Deactivated successfully.
Jan 31 07:56:59 compute-0 ceph-mon[75294]: mgrmap e9: compute-0.lhuavc(active, since 7s)
Jan 31 07:56:59 compute-0 ceph-mon[75294]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:56:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:56:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:56:59 compute-0 podman[76359]: 2026-01-31 07:56:59.657422349 +0000 UTC m=+2.231058734 container remove a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15 (image=quay.io/ceph/ceph:v20, name=naughty_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:56:59 compute-0 podman[76413]: 2026-01-31 07:56:59.687711083 +0000 UTC m=+0.017760419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:56:59 compute-0 podman[76413]: 2026-01-31 07:56:59.862103818 +0000 UTC m=+0.192153134 container create fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4 (image=quay.io/ceph/ceph:v20, name=festive_wu, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:59 compute-0 systemd[1]: Started libpod-conmon-fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4.scope.
Jan 31 07:56:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e108f6268e9e327895be517fc2ebdb48c37bfb11ed68c87024a1cd9e6cbddff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e108f6268e9e327895be517fc2ebdb48c37bfb11ed68c87024a1cd9e6cbddff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e108f6268e9e327895be517fc2ebdb48c37bfb11ed68c87024a1cd9e6cbddff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:00 compute-0 podman[76413]: 2026-01-31 07:57:00.051361334 +0000 UTC m=+0.381410670 container init fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4 (image=quay.io/ceph/ceph:v20, name=festive_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 07:57:00 compute-0 podman[76413]: 2026-01-31 07:57:00.057026656 +0000 UTC m=+0.387076012 container start fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4 (image=quay.io/ceph/ceph:v20, name=festive_wu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:57:00 compute-0 podman[76413]: 2026-01-31 07:57:00.150927549 +0000 UTC m=+0.480976855 container attach fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4 (image=quay.io/ceph/ceph:v20, name=festive_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:00 compute-0 systemd[1]: libpod-conmon-a4871d802de1afec94914c9f3ec2771779c4a2b6c56f4608a3c8ef7fb49a7b15.scope: Deactivated successfully.
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 31 07:57:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: [cephadm INFO root] Set ssh ssh_user
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 07:57:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 31 07:57:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: [cephadm INFO root] Set ssh ssh_config
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 07:57:00 compute-0 festive_wu[76429]: ssh user set to ceph-admin. sudo will be used
Jan 31 07:57:00 compute-0 systemd[1]: libpod-fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4.scope: Deactivated successfully.
Jan 31 07:57:00 compute-0 podman[76413]: 2026-01-31 07:57:00.633617678 +0000 UTC m=+0.963666994 container died fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4 (image=quay.io/ceph/ceph:v20, name=festive_wu, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:00 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:57:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e108f6268e9e327895be517fc2ebdb48c37bfb11ed68c87024a1cd9e6cbddff-merged.mount: Deactivated successfully.
Jan 31 07:57:00 compute-0 podman[76413]: 2026-01-31 07:57:00.921644966 +0000 UTC m=+1.251694292 container remove fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4 (image=quay.io/ceph/ceph:v20, name=festive_wu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:00 compute-0 systemd[1]: libpod-conmon-fd8195cfc2ca65a1d45793802f03e3ef799729f86ec5c4b64203224e1ed929f4.scope: Deactivated successfully.
Jan 31 07:57:01 compute-0 podman[76469]: 2026-01-31 07:57:00.954084758 +0000 UTC m=+0.021181620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:01 compute-0 podman[76469]: 2026-01-31 07:57:01.10900133 +0000 UTC m=+0.176098192 container create 1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4 (image=quay.io/ceph/ceph:v20, name=mystifying_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:01 compute-0 systemd[1]: Started libpod-conmon-1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4.scope.
Jan 31 07:57:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1fa76730b6220552a80ba5dbbe304d97c467277ee21adefe9ee59cfada0693/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1fa76730b6220552a80ba5dbbe304d97c467277ee21adefe9ee59cfada0693/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1fa76730b6220552a80ba5dbbe304d97c467277ee21adefe9ee59cfada0693/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1fa76730b6220552a80ba5dbbe304d97c467277ee21adefe9ee59cfada0693/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1fa76730b6220552a80ba5dbbe304d97c467277ee21adefe9ee59cfada0693/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:01 compute-0 podman[76469]: 2026-01-31 07:57:01.348450254 +0000 UTC m=+0.415547126 container init 1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4 (image=quay.io/ceph/ceph:v20, name=mystifying_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:01 compute-0 podman[76469]: 2026-01-31 07:57:01.352288587 +0000 UTC m=+0.419385439 container start 1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4 (image=quay.io/ceph/ceph:v20, name=mystifying_sinoussi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:01 compute-0 podman[76469]: 2026-01-31 07:57:01.415595148 +0000 UTC m=+0.482692020 container attach 1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4 (image=quay.io/ceph/ceph:v20, name=mystifying_sinoussi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:57:01 compute-0 ceph-mon[75294]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:01 compute-0 ceph-mon[75294]: Set ssh ssh_user
Jan 31 07:57:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:01 compute-0 ceph-mon[75294]: Set ssh ssh_config
Jan 31 07:57:01 compute-0 ceph-mon[75294]: ssh user set to ceph-admin. sudo will be used
Jan 31 07:57:01 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 31 07:57:01 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:01 compute-0 ceph-mgr[75591]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 07:57:01 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 07:57:01 compute-0 ceph-mgr[75591]: [cephadm INFO root] Set ssh private key
Jan 31 07:57:01 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 07:57:01 compute-0 systemd[1]: libpod-1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4.scope: Deactivated successfully.
Jan 31 07:57:01 compute-0 podman[76469]: 2026-01-31 07:57:01.759410646 +0000 UTC m=+0.826507498 container died 1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4 (image=quay.io/ceph/ceph:v20, name=mystifying_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e1fa76730b6220552a80ba5dbbe304d97c467277ee21adefe9ee59cfada0693-merged.mount: Deactivated successfully.
Jan 31 07:57:02 compute-0 podman[76469]: 2026-01-31 07:57:02.040306822 +0000 UTC m=+1.107403674 container remove 1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4 (image=quay.io/ceph/ceph:v20, name=mystifying_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:02 compute-0 podman[76524]: 2026-01-31 07:57:02.129329305 +0000 UTC m=+0.075466149 container create d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5 (image=quay.io/ceph/ceph:v20, name=naughty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:02 compute-0 podman[76524]: 2026-01-31 07:57:02.074066189 +0000 UTC m=+0.020203033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:02 compute-0 systemd[1]: Started libpod-conmon-d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5.scope.
Jan 31 07:57:02 compute-0 systemd[1]: libpod-conmon-1e1db5f6444546fb4429a597936b645392f510f575d358f9f54efc9d3b29a2f4.scope: Deactivated successfully.
Jan 31 07:57:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b82819756b85acbc04353834d28c5112e6e62d02756cde7ffe62f6fe20b18b/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b82819756b85acbc04353834d28c5112e6e62d02756cde7ffe62f6fe20b18b/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b82819756b85acbc04353834d28c5112e6e62d02756cde7ffe62f6fe20b18b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b82819756b85acbc04353834d28c5112e6e62d02756cde7ffe62f6fe20b18b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b82819756b85acbc04353834d28c5112e6e62d02756cde7ffe62f6fe20b18b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:02 compute-0 podman[76524]: 2026-01-31 07:57:02.328595288 +0000 UTC m=+0.274732212 container init d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5 (image=quay.io/ceph/ceph:v20, name=naughty_mirzakhani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 31 07:57:02 compute-0 podman[76524]: 2026-01-31 07:57:02.332255917 +0000 UTC m=+0.278392781 container start d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5 (image=quay.io/ceph/ceph:v20, name=naughty_mirzakhani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:57:02 compute-0 podman[76524]: 2026-01-31 07:57:02.482857093 +0000 UTC m=+0.428993917 container attach d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5 (image=quay.io/ceph/ceph:v20, name=naughty_mirzakhani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:02 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 31 07:57:02 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:02 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:57:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:03 compute-0 ceph-mgr[75591]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 07:57:03 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 07:57:03 compute-0 systemd[1]: libpod-d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5.scope: Deactivated successfully.
Jan 31 07:57:03 compute-0 podman[76524]: 2026-01-31 07:57:03.032996014 +0000 UTC m=+0.979132878 container died d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5 (image=quay.io/ceph/ceph:v20, name=naughty_mirzakhani, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:03 compute-0 ceph-mon[75294]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:03 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:03 compute-0 ceph-mon[75294]: Set ssh ssh_identity_key
Jan 31 07:57:03 compute-0 ceph-mon[75294]: Set ssh private key
Jan 31 07:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-67b82819756b85acbc04353834d28c5112e6e62d02756cde7ffe62f6fe20b18b-merged.mount: Deactivated successfully.
Jan 31 07:57:03 compute-0 podman[76524]: 2026-01-31 07:57:03.444021517 +0000 UTC m=+1.390158371 container remove d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5 (image=quay.io/ceph/ceph:v20, name=naughty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:57:03 compute-0 systemd[1]: libpod-conmon-d69514fff36b9d1c3886f2f6f828cf36f10416c7dba71732803b0d6b3093ebe5.scope: Deactivated successfully.
Jan 31 07:57:03 compute-0 podman[76579]: 2026-01-31 07:57:03.505428577 +0000 UTC m=+0.043602083 container create 521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05 (image=quay.io/ceph/ceph:v20, name=practical_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:03 compute-0 systemd[1]: Started libpod-conmon-521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05.scope.
Jan 31 07:57:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb9c401b32c3fc697e2c5d6eff160795c18b7a87fada6204878b16277e97f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb9c401b32c3fc697e2c5d6eff160795c18b7a87fada6204878b16277e97f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb9c401b32c3fc697e2c5d6eff160795c18b7a87fada6204878b16277e97f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:03 compute-0 podman[76579]: 2026-01-31 07:57:03.485718007 +0000 UTC m=+0.023891533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:03 compute-0 podman[76579]: 2026-01-31 07:57:03.661465039 +0000 UTC m=+0.199638555 container init 521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05 (image=quay.io/ceph/ceph:v20, name=practical_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:57:03 compute-0 podman[76579]: 2026-01-31 07:57:03.668553939 +0000 UTC m=+0.206727465 container start 521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05 (image=quay.io/ceph/ceph:v20, name=practical_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:57:03 compute-0 podman[76579]: 2026-01-31 07:57:03.693958452 +0000 UTC m=+0.232131988 container attach 521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05 (image=quay.io/ceph/ceph:v20, name=practical_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:57:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:04 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:04 compute-0 practical_nash[76595]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChIkVIVX6aLjNi98bE0fmmrYTA3p/ASDZM8buqrh/D/nPxXBMhiXT9lYRvil668Ky42FCe6DbiLaTnCQG+B4fko0hGLyGZp/qqggJEai299Pfoa2t2EvIidYa8NSJXf8Kd5a+RuKcUtBuXA3IJOVjY31AQazIlI7jFsXQSF1W1A/xoXDd/gkJl92QgYbiN5kruK7PdYfcp45XSrQSA+T+Tt0tGkhADkUE+KBnWPnK7psjtdNqLurWs8wFJOOQhALTMivB3C6imOMXtaxsgDU845SLOARg2dO/Z6vMaqKDSaHyucuynK7rptTRRB50ka+K+pTQ3XXiviNQvlN9A8CO6htP1rEPuRaNDsEIiwSg71htcnj++7zIJ6mmn8/Zxzo3QKQCBXKVzGukvZV+PERI3fn4RGK1dyXlQ4nnLiv0Q7ydq0F3IZFLsJg2dASMyajDaVGiQRS1drWdhK0pDC8nn+R4vrbwSAmbfBtfdIC4dZ5hg4kiZFXtEAEt4sa+g1hE= zuul@controller
Jan 31 07:57:04 compute-0 systemd[1]: libpod-521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05.scope: Deactivated successfully.
Jan 31 07:57:04 compute-0 conmon[76595]: conmon 521fb43151673a6c37b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05.scope/container/memory.events
Jan 31 07:57:04 compute-0 podman[76579]: 2026-01-31 07:57:04.073728585 +0000 UTC m=+0.611902101 container died 521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05 (image=quay.io/ceph/ceph:v20, name=practical_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:04 compute-0 ceph-mon[75294]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:04 compute-0 ceph-mon[75294]: Set ssh ssh_identity_pub
Jan 31 07:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-16fb9c401b32c3fc697e2c5d6eff160795c18b7a87fada6204878b16277e97f1-merged.mount: Deactivated successfully.
Jan 31 07:57:04 compute-0 podman[76579]: 2026-01-31 07:57:04.673738906 +0000 UTC m=+1.211912432 container remove 521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05 (image=quay.io/ceph/ceph:v20, name=practical_nash, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:04 compute-0 systemd[1]: libpod-conmon-521fb43151673a6c37b2058313a606fe886932352125e7b0ae4bf7d485fe2c05.scope: Deactivated successfully.
Jan 31 07:57:04 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:04 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:57:04 compute-0 podman[76635]: 2026-01-31 07:57:04.786345991 +0000 UTC m=+0.099630347 container create 0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3 (image=quay.io/ceph/ceph:v20, name=fervent_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:57:04 compute-0 podman[76635]: 2026-01-31 07:57:04.701451691 +0000 UTC m=+0.014736067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:04 compute-0 systemd[1]: Started libpod-conmon-0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3.scope.
Jan 31 07:57:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823cdee375fd724daaa1f519512805ca1105d29552546dbb96459a894b155319/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823cdee375fd724daaa1f519512805ca1105d29552546dbb96459a894b155319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823cdee375fd724daaa1f519512805ca1105d29552546dbb96459a894b155319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:04 compute-0 podman[76635]: 2026-01-31 07:57:04.900441437 +0000 UTC m=+0.213725813 container init 0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3 (image=quay.io/ceph/ceph:v20, name=fervent_stonebraker, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:57:04 compute-0 podman[76635]: 2026-01-31 07:57:04.907072956 +0000 UTC m=+0.220357352 container start 0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3 (image=quay.io/ceph/ceph:v20, name=fervent_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:57:04 compute-0 podman[76635]: 2026-01-31 07:57:04.929774006 +0000 UTC m=+0.243058392 container attach 0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3 (image=quay.io/ceph/ceph:v20, name=fervent_stonebraker, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 07:57:05 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:05 compute-0 ceph-mon[75294]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:05 compute-0 sshd-session[76677]: Accepted publickey for ceph-admin from 192.168.122.100 port 48054 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:05 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 07:57:05 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 07:57:05 compute-0 systemd-logind[810]: New session 21 of user ceph-admin.
Jan 31 07:57:05 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 07:57:05 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 07:57:05 compute-0 systemd[76681]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:05 compute-0 systemd[76681]: Queued start job for default target Main User Target.
Jan 31 07:57:05 compute-0 systemd[76681]: Created slice User Application Slice.
Jan 31 07:57:05 compute-0 systemd[76681]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:57:05 compute-0 systemd[76681]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:57:05 compute-0 systemd[76681]: Reached target Paths.
Jan 31 07:57:05 compute-0 systemd[76681]: Reached target Timers.
Jan 31 07:57:05 compute-0 systemd[76681]: Starting D-Bus User Message Bus Socket...
Jan 31 07:57:05 compute-0 systemd[76681]: Starting Create User's Volatile Files and Directories...
Jan 31 07:57:05 compute-0 sshd-session[76694]: Accepted publickey for ceph-admin from 192.168.122.100 port 48056 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:05 compute-0 systemd[76681]: Finished Create User's Volatile Files and Directories.
Jan 31 07:57:05 compute-0 systemd[76681]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:57:05 compute-0 systemd[76681]: Reached target Sockets.
Jan 31 07:57:05 compute-0 systemd[76681]: Reached target Basic System.
Jan 31 07:57:05 compute-0 systemd[76681]: Reached target Main User Target.
Jan 31 07:57:05 compute-0 systemd[76681]: Startup finished in 112ms.
Jan 31 07:57:05 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 07:57:05 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 31 07:57:05 compute-0 sshd-session[76677]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:05 compute-0 systemd-logind[810]: New session 23 of user ceph-admin.
Jan 31 07:57:05 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 07:57:05 compute-0 sshd-session[76694]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:05 compute-0 sudo[76701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:05 compute-0 sudo[76701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:05 compute-0 sudo[76701]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:06 compute-0 sshd-session[76726]: Accepted publickey for ceph-admin from 192.168.122.100 port 48064 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:06 compute-0 systemd-logind[810]: New session 24 of user ceph-admin.
Jan 31 07:57:06 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 07:57:06 compute-0 sshd-session[76726]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:06 compute-0 sudo[76730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 07:57:06 compute-0 sudo[76730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:06 compute-0 sudo[76730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:06 compute-0 sshd-session[76755]: Accepted publickey for ceph-admin from 192.168.122.100 port 48072 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:06 compute-0 systemd-logind[810]: New session 25 of user ceph-admin.
Jan 31 07:57:06 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 07:57:06 compute-0 sshd-session[76755]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:06 compute-0 ceph-mon[75294]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:06 compute-0 sudo[76759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 31 07:57:06 compute-0 sudo[76759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:06 compute-0 sudo[76759]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:06 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 07:57:06 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 07:57:06 compute-0 sshd-session[76784]: Accepted publickey for ceph-admin from 192.168.122.100 port 48080 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:06 compute-0 systemd-logind[810]: New session 26 of user ceph-admin.
Jan 31 07:57:06 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 07:57:06 compute-0 sshd-session[76784]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:06 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:06 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:57:06 compute-0 sudo[76788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:06 compute-0 sudo[76788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:06 compute-0 sudo[76788]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:06 compute-0 sshd-session[76813]: Accepted publickey for ceph-admin from 192.168.122.100 port 48094 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:06 compute-0 systemd-logind[810]: New session 27 of user ceph-admin.
Jan 31 07:57:07 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 07:57:07 compute-0 sshd-session[76813]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:07 compute-0 sudo[76817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:07 compute-0 sudo[76817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:07 compute-0 sudo[76817]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:07 compute-0 sshd-session[76842]: Accepted publickey for ceph-admin from 192.168.122.100 port 48096 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:07 compute-0 systemd-logind[810]: New session 28 of user ceph-admin.
Jan 31 07:57:07 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 07:57:07 compute-0 sshd-session[76842]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:07 compute-0 sudo[76846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 31 07:57:07 compute-0 sudo[76846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:07 compute-0 sudo[76846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:07 compute-0 ceph-mon[75294]: Deploying cephadm binary to compute-0
Jan 31 07:57:07 compute-0 sshd-session[76871]: Accepted publickey for ceph-admin from 192.168.122.100 port 48110 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:07 compute-0 systemd-logind[810]: New session 29 of user ceph-admin.
Jan 31 07:57:07 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 07:57:07 compute-0 sshd-session[76871]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:07 compute-0 sudo[76875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:07 compute-0 sudo[76875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:07 compute-0 sudo[76875]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:07 compute-0 sshd-session[76900]: Accepted publickey for ceph-admin from 192.168.122.100 port 48112 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:07 compute-0 systemd-logind[810]: New session 30 of user ceph-admin.
Jan 31 07:57:07 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 07:57:07 compute-0 sshd-session[76900]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:08 compute-0 sudo[76904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 31 07:57:08 compute-0 sudo[76904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:08 compute-0 sudo[76904]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:08 compute-0 sshd-session[76929]: Accepted publickey for ceph-admin from 192.168.122.100 port 48122 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:08 compute-0 systemd-logind[810]: New session 31 of user ceph-admin.
Jan 31 07:57:08 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 07:57:08 compute-0 sshd-session[76929]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:08 compute-0 ceph-mgr[75591]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:57:08 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:09 compute-0 sshd-session[76956]: Accepted publickey for ceph-admin from 192.168.122.100 port 48130 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:09 compute-0 systemd-logind[810]: New session 32 of user ceph-admin.
Jan 31 07:57:09 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 07:57:09 compute-0 sshd-session[76956]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:09 compute-0 sudo[76960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 31 07:57:09 compute-0 sudo[76960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:09 compute-0 sudo[76960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:09 compute-0 sshd-session[76985]: Accepted publickey for ceph-admin from 192.168.122.100 port 48144 ssh2: RSA SHA256:6fgAf6twLsApDNqAjXH7g1lIMP5vqkKvsstOpvGDfiY
Jan 31 07:57:09 compute-0 systemd-logind[810]: New session 33 of user ceph-admin.
Jan 31 07:57:10 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 31 07:57:10 compute-0 sshd-session[76985]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:57:10 compute-0 sudo[76989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 07:57:10 compute-0 sudo[76989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:10 compute-0 sudo[76989]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 07:57:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:10 compute-0 ceph-mgr[75591]: [cephadm INFO root] Added host compute-0
Jan 31 07:57:10 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 07:57:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 07:57:10 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:57:10 compute-0 fervent_stonebraker[76651]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 07:57:10 compute-0 systemd[1]: libpod-0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3.scope: Deactivated successfully.
Jan 31 07:57:10 compute-0 podman[76635]: 2026-01-31 07:57:10.596291152 +0000 UTC m=+5.909575548 container died 0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3 (image=quay.io/ceph/ceph:v20, name=fervent_stonebraker, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:10 compute-0 sudo[77034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:10 compute-0 sudo[77034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:10 compute-0 sudo[77034]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:10 compute-0 sudo[77067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 31 07:57:10 compute-0 sudo[77067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:10 compute-0 ceph-mgr[75591]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 07:57:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:10 compute-0 ceph-mon[75294]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 07:57:10 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-823cdee375fd724daaa1f519512805ca1105d29552546dbb96459a894b155319-merged.mount: Deactivated successfully.
Jan 31 07:57:11 compute-0 podman[76635]: 2026-01-31 07:57:11.772433852 +0000 UTC m=+7.085718238 container remove 0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3 (image=quay.io/ceph/ceph:v20, name=fervent_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 07:57:11 compute-0 systemd[1]: libpod-conmon-0fe88be95d0f4b1cfd7eb09c5caa7c42eb17343c99633b4f4792b735b4c7a4a3.scope: Deactivated successfully.
Jan 31 07:57:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:11 compute-0 ceph-mon[75294]: Added host compute-0
Jan 31 07:57:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 07:57:11 compute-0 ceph-mon[75294]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:11 compute-0 ceph-mon[75294]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 07:57:11 compute-0 podman[77123]: 2026-01-31 07:57:11.813601978 +0000 UTC m=+0.022441424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:11 compute-0 podman[77123]: 2026-01-31 07:57:11.981173631 +0000 UTC m=+0.190013067 container create f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9 (image=quay.io/ceph/ceph:v20, name=crazy_hoover, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:57:12 compute-0 systemd[1]: Started libpod-conmon-f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9.scope.
Jan 31 07:57:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5559edee59a4bfb19837de0cdae9f2000674876529dfbc7b367c59352916b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5559edee59a4bfb19837de0cdae9f2000674876529dfbc7b367c59352916b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5559edee59a4bfb19837de0cdae9f2000674876529dfbc7b367c59352916b0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:12 compute-0 podman[77123]: 2026-01-31 07:57:12.225920687 +0000 UTC m=+0.434760143 container init f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9 (image=quay.io/ceph/ceph:v20, name=crazy_hoover, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:12 compute-0 podman[77123]: 2026-01-31 07:57:12.233466819 +0000 UTC m=+0.442306245 container start f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9 (image=quay.io/ceph/ceph:v20, name=crazy_hoover, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:12 compute-0 podman[77123]: 2026-01-31 07:57:12.36487708 +0000 UTC m=+0.573716536 container attach f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9 (image=quay.io/ceph/ceph:v20, name=crazy_hoover, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:12 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:12 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 07:57:12 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 07:57:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 07:57:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:12 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:12 compute-0 crazy_hoover[77139]: Scheduled mon update...
Jan 31 07:57:12 compute-0 systemd[1]: libpod-f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9.scope: Deactivated successfully.
Jan 31 07:57:12 compute-0 podman[77123]: 2026-01-31 07:57:12.876569988 +0000 UTC m=+1.085409414 container died f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9 (image=quay.io/ceph/ceph:v20, name=crazy_hoover, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba5559edee59a4bfb19837de0cdae9f2000674876529dfbc7b367c59352916b0-merged.mount: Deactivated successfully.
Jan 31 07:57:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:13 compute-0 podman[77109]: 2026-01-31 07:57:13.972929693 +0000 UTC m=+2.883720888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:14 compute-0 ceph-mon[75294]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:14 compute-0 ceph-mon[75294]: Saving service mon spec with placement count:5
Jan 31 07:57:14 compute-0 ceph-mon[75294]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:14 compute-0 podman[77123]: 2026-01-31 07:57:14.39024651 +0000 UTC m=+2.599085936 container remove f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9 (image=quay.io/ceph/ceph:v20, name=crazy_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:14 compute-0 systemd[1]: libpod-conmon-f7916937ffb980b32218a56b5d17b8703e509b3c6b185e7e4399c33707df0bd9.scope: Deactivated successfully.
Jan 31 07:57:14 compute-0 podman[77191]: 2026-01-31 07:57:14.446861619 +0000 UTC m=+0.033742058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:14 compute-0 podman[77191]: 2026-01-31 07:57:14.594043531 +0000 UTC m=+0.180923990 container create d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477 (image=quay.io/ceph/ceph:v20, name=charming_swirles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:14 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:14 compute-0 systemd[1]: Started libpod-conmon-d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477.scope.
Jan 31 07:57:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228491e72af5c1f265405dfbe3e424166d3f1dc7b1d7f96b198c0cdf1bdfaf82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228491e72af5c1f265405dfbe3e424166d3f1dc7b1d7f96b198c0cdf1bdfaf82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228491e72af5c1f265405dfbe3e424166d3f1dc7b1d7f96b198c0cdf1bdfaf82/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:14 compute-0 podman[77191]: 2026-01-31 07:57:14.900423272 +0000 UTC m=+0.487303731 container init d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477 (image=quay.io/ceph/ceph:v20, name=charming_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:14 compute-0 podman[77191]: 2026-01-31 07:57:14.90624787 +0000 UTC m=+0.493128329 container start d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477 (image=quay.io/ceph/ceph:v20, name=charming_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:15 compute-0 podman[77191]: 2026-01-31 07:57:15.10334816 +0000 UTC m=+0.690228619 container attach d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477 (image=quay.io/ceph/ceph:v20, name=charming_swirles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:15 compute-0 podman[77203]: 2026-01-31 07:57:15.136137891 +0000 UTC m=+0.666408451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:15 compute-0 podman[77203]: 2026-01-31 07:57:15.311965102 +0000 UTC m=+0.842235622 container create 37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe (image=quay.io/ceph/ceph:v20, name=relaxed_engelbart, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:15 compute-0 ceph-mon[75294]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:15 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:15 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 07:57:15 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 07:57:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 07:57:15 compute-0 systemd[1]: Started libpod-conmon-37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe.scope.
Jan 31 07:57:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:15 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:15 compute-0 charming_swirles[77220]: Scheduled mgr update...
Jan 31 07:57:15 compute-0 systemd[1]: libpod-d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477.scope: Deactivated successfully.
Jan 31 07:57:15 compute-0 podman[77203]: 2026-01-31 07:57:15.5351329 +0000 UTC m=+1.065403420 container init 37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe (image=quay.io/ceph/ceph:v20, name=relaxed_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:57:15 compute-0 podman[77191]: 2026-01-31 07:57:15.53586214 +0000 UTC m=+1.122742569 container died d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477 (image=quay.io/ceph/ceph:v20, name=charming_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:57:15 compute-0 podman[77203]: 2026-01-31 07:57:15.541083022 +0000 UTC m=+1.071353542 container start 37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe (image=quay.io/ceph/ceph:v20, name=relaxed_engelbart, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:15 compute-0 relaxed_engelbart[77249]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 07:57:15 compute-0 systemd[1]: libpod-37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe.scope: Deactivated successfully.
Jan 31 07:57:15 compute-0 conmon[77249]: conmon 37c59a6c43ed9d005b46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe.scope/container/memory.events
Jan 31 07:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-228491e72af5c1f265405dfbe3e424166d3f1dc7b1d7f96b198c0cdf1bdfaf82-merged.mount: Deactivated successfully.
Jan 31 07:57:16 compute-0 podman[77191]: 2026-01-31 07:57:16.059298353 +0000 UTC m=+1.646178782 container remove d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477 (image=quay.io/ceph/ceph:v20, name=charming_swirles, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:16 compute-0 systemd[1]: libpod-conmon-d4941f9141fa14db7262b04ad753a07f9c20cdeb14adaeb4382ae1bfc4226477.scope: Deactivated successfully.
Jan 31 07:57:16 compute-0 podman[77203]: 2026-01-31 07:57:16.130561641 +0000 UTC m=+1.660832191 container attach 37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe (image=quay.io/ceph/ceph:v20, name=relaxed_engelbart, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 07:57:16 compute-0 podman[77203]: 2026-01-31 07:57:16.131106425 +0000 UTC m=+1.661376985 container died 37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe (image=quay.io/ceph/ceph:v20, name=relaxed_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-718744508f65456df44ee41dc39ba7fcb64029977503de212df475bc28d78b0b-merged.mount: Deactivated successfully.
Jan 31 07:57:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:16 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:16 compute-0 ceph-mon[75294]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:16 compute-0 ceph-mon[75294]: Saving service mgr spec with placement count:2
Jan 31 07:57:16 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:17 compute-0 podman[77203]: 2026-01-31 07:57:17.081385185 +0000 UTC m=+2.611655735 container remove 37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe (image=quay.io/ceph/ceph:v20, name=relaxed_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:17 compute-0 sudo[77067]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:17 compute-0 systemd[1]: libpod-conmon-37c59a6c43ed9d005b46f8cfa11e99a36bc686ebcca1aaba28d73b34d2162dbe.scope: Deactivated successfully.
Jan 31 07:57:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 31 07:57:17 compute-0 podman[77280]: 2026-01-31 07:57:17.131021144 +0000 UTC m=+1.053866857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:17 compute-0 podman[77280]: 2026-01-31 07:57:17.428184234 +0000 UTC m=+1.351029937 container create 271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22 (image=quay.io/ceph/ceph:v20, name=sharp_boyd, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 07:57:17 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:17 compute-0 sudo[77295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:17 compute-0 sudo[77295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:17 compute-0 sudo[77295]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:17 compute-0 systemd[1]: Started libpod-conmon-271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22.scope.
Jan 31 07:57:17 compute-0 sudo[77320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 07:57:17 compute-0 sudo[77320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a10cad0d517fd00452edfb1f18492874a18c857bc9313f606fff421792820/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a10cad0d517fd00452edfb1f18492874a18c857bc9313f606fff421792820/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211a10cad0d517fd00452edfb1f18492874a18c857bc9313f606fff421792820/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:17 compute-0 podman[77280]: 2026-01-31 07:57:17.777153663 +0000 UTC m=+1.699999396 container init 271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22 (image=quay.io/ceph/ceph:v20, name=sharp_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:17 compute-0 podman[77280]: 2026-01-31 07:57:17.783812953 +0000 UTC m=+1.706658626 container start 271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22 (image=quay.io/ceph/ceph:v20, name=sharp_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:57:17 compute-0 ceph-mon[75294]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:17 compute-0 podman[77280]: 2026-01-31 07:57:17.87819436 +0000 UTC m=+1.801040023 container attach 271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22 (image=quay.io/ceph/ceph:v20, name=sharp_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:17 compute-0 sudo[77320]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:18 compute-0 sudo[77391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:18 compute-0 sudo[77391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:18 compute-0 sudo[77391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:18 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:18 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 07:57:18 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 07:57:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 07:57:18 compute-0 sudo[77416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 07:57:18 compute-0 sudo[77416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:18 compute-0 sharp_boyd[77346]: Scheduled crash update...
Jan 31 07:57:18 compute-0 systemd[1]: libpod-271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22.scope: Deactivated successfully.
Jan 31 07:57:18 compute-0 podman[77280]: 2026-01-31 07:57:18.330812757 +0000 UTC m=+2.253658460 container died 271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22 (image=quay.io/ceph/ceph:v20, name=sharp_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-211a10cad0d517fd00452edfb1f18492874a18c857bc9313f606fff421792820-merged.mount: Deactivated successfully.
Jan 31 07:57:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:18 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:18 compute-0 podman[77280]: 2026-01-31 07:57:18.994718599 +0000 UTC m=+2.917564282 container remove 271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22 (image=quay.io/ceph/ceph:v20, name=sharp_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 07:57:19 compute-0 podman[77483]: 2026-01-31 07:57:19.048323677 +0000 UTC m=+0.029375690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:19 compute-0 podman[77483]: 2026-01-31 07:57:19.323515611 +0000 UTC m=+0.304567604 container create 67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41 (image=quay.io/ceph/ceph:v20, name=hopeful_cori, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 07:57:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:19 compute-0 ceph-mon[75294]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:19 compute-0 ceph-mon[75294]: Saving service crash spec with placement *
Jan 31 07:57:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:19 compute-0 ceph-mon[75294]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:19 compute-0 systemd[1]: Started libpod-conmon-67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41.scope.
Jan 31 07:57:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f70108ffe98c7c6429766819a9e971bb9a0b1726b31a2859cc9be8412c3e05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f70108ffe98c7c6429766819a9e971bb9a0b1726b31a2859cc9be8412c3e05/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f70108ffe98c7c6429766819a9e971bb9a0b1726b31a2859cc9be8412c3e05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:19 compute-0 podman[77509]: 2026-01-31 07:57:19.658643641 +0000 UTC m=+0.578646085 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 07:57:19 compute-0 podman[77483]: 2026-01-31 07:57:19.958160166 +0000 UTC m=+0.939212179 container init 67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41 (image=quay.io/ceph/ceph:v20, name=hopeful_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:19 compute-0 podman[77483]: 2026-01-31 07:57:19.965579948 +0000 UTC m=+0.946631941 container start 67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41 (image=quay.io/ceph/ceph:v20, name=hopeful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 07:57:20 compute-0 podman[77483]: 2026-01-31 07:57:20.186370891 +0000 UTC m=+1.167422894 container attach 67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41 (image=quay.io/ceph/ceph:v20, name=hopeful_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:20 compute-0 systemd[1]: libpod-conmon-271d13e979c5a7860e1661a402ce7d6dc0a43589f3afe7bc6cecdd026e229b22.scope: Deactivated successfully.
Jan 31 07:57:20 compute-0 podman[77509]: 2026-01-31 07:57:20.271543397 +0000 UTC m=+1.191545640 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 31 07:57:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:20 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1339756765' entity='client.admin' 
Jan 31 07:57:20 compute-0 systemd[1]: libpod-67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41.scope: Deactivated successfully.
Jan 31 07:57:20 compute-0 podman[77483]: 2026-01-31 07:57:20.861482527 +0000 UTC m=+1.842534520 container died 67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41 (image=quay.io/ceph/ceph:v20, name=hopeful_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2f70108ffe98c7c6429766819a9e971bb9a0b1726b31a2859cc9be8412c3e05-merged.mount: Deactivated successfully.
Jan 31 07:57:21 compute-0 podman[77483]: 2026-01-31 07:57:21.55811992 +0000 UTC m=+2.539171913 container remove 67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41 (image=quay.io/ceph/ceph:v20, name=hopeful_cori, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:21 compute-0 systemd[1]: libpod-conmon-67ba9401961961adab227786ac113d1dccf9e9fe07015eebb308b1c3c3b49d41.scope: Deactivated successfully.
Jan 31 07:57:21 compute-0 podman[77598]: 2026-01-31 07:57:21.589685058 +0000 UTC m=+0.017683592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:21 compute-0 podman[77598]: 2026-01-31 07:57:21.794121588 +0000 UTC m=+0.222120142 container create 1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0 (image=quay.io/ceph/ceph:v20, name=epic_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:22 compute-0 systemd[1]: Started libpod-conmon-1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0.scope.
Jan 31 07:57:22 compute-0 ceph-mon[75294]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:22 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1339756765' entity='client.admin' 
Jan 31 07:57:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8491f47b24f1a65c2ff456baa4871a1b0b8982eaa268aafe62232c1d582a5c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8491f47b24f1a65c2ff456baa4871a1b0b8982eaa268aafe62232c1d582a5c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8491f47b24f1a65c2ff456baa4871a1b0b8982eaa268aafe62232c1d582a5c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:22 compute-0 podman[77598]: 2026-01-31 07:57:22.497256486 +0000 UTC m=+0.925255100 container init 1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0 (image=quay.io/ceph/ceph:v20, name=epic_clarke, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 07:57:22 compute-0 podman[77598]: 2026-01-31 07:57:22.506743844 +0000 UTC m=+0.934742388 container start 1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0 (image=quay.io/ceph/ceph:v20, name=epic_clarke, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:22 compute-0 podman[77598]: 2026-01-31 07:57:22.671714769 +0000 UTC m=+1.099713333 container attach 1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0 (image=quay.io/ceph/ceph:v20, name=epic_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 07:57:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:22 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:22 compute-0 sudo[77416]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:22 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 31 07:57:23 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:23 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:23 compute-0 systemd[1]: libpod-1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0.scope: Deactivated successfully.
Jan 31 07:57:23 compute-0 podman[77598]: 2026-01-31 07:57:23.464514426 +0000 UTC m=+1.892512950 container died 1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0 (image=quay.io/ceph/ceph:v20, name=epic_clarke, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 07:57:23 compute-0 sudo[77675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:23 compute-0 sudo[77675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:23 compute-0 sudo[77675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:23 compute-0 ceph-mon[75294]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:23 compute-0 ceph-mon[75294]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:23 compute-0 sudo[77712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 07:57:23 compute-0 sudo[77712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8491f47b24f1a65c2ff456baa4871a1b0b8982eaa268aafe62232c1d582a5c1-merged.mount: Deactivated successfully.
Jan 31 07:57:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:23 compute-0 podman[77598]: 2026-01-31 07:57:23.826726065 +0000 UTC m=+2.254724579 container remove 1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0 (image=quay.io/ceph/ceph:v20, name=epic_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:23 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77758 (sysctl)
Jan 31 07:57:23 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 07:57:23 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 07:57:23 compute-0 podman[77749]: 2026-01-31 07:57:23.941578487 +0000 UTC m=+0.099817995 container create 03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a (image=quay.io/ceph/ceph:v20, name=happy_engelbart, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:57:23 compute-0 podman[77749]: 2026-01-31 07:57:23.865014756 +0000 UTC m=+0.023254324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:24 compute-0 systemd[1]: Started libpod-conmon-03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a.scope.
Jan 31 07:57:24 compute-0 systemd[1]: libpod-conmon-1d795727137a61e7e1e741421cae617b674c183adf11a04cb5abdefbe2e7fce0.scope: Deactivated successfully.
Jan 31 07:57:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b7589faaf51cda6b306ff302f45e93e02b79eb191f94f041e580351ccc5aef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b7589faaf51cda6b306ff302f45e93e02b79eb191f94f041e580351ccc5aef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b7589faaf51cda6b306ff302f45e93e02b79eb191f94f041e580351ccc5aef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 podman[77749]: 2026-01-31 07:57:24.105867205 +0000 UTC m=+0.264106753 container init 03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a (image=quay.io/ceph/ceph:v20, name=happy_engelbart, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:24 compute-0 podman[77749]: 2026-01-31 07:57:24.113317247 +0000 UTC m=+0.271556795 container start 03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a (image=quay.io/ceph/ceph:v20, name=happy_engelbart, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:24 compute-0 sudo[77712]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:24 compute-0 podman[77749]: 2026-01-31 07:57:24.145385029 +0000 UTC m=+0.303624547 container attach 03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a (image=quay.io/ceph/ceph:v20, name=happy_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 07:57:24 compute-0 sudo[77793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:24 compute-0 sudo[77793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:24 compute-0 sudo[77793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:24 compute-0 sudo[77837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 31 07:57:24 compute-0 sudo[77837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 07:57:24 compute-0 sudo[77837]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 07:57:24 compute-0 happy_engelbart[77773]: Added label _admin to host compute-0
Jan 31 07:57:24 compute-0 systemd[1]: libpod-03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a.scope: Deactivated successfully.
Jan 31 07:57:24 compute-0 podman[77749]: 2026-01-31 07:57:24.571886166 +0000 UTC m=+0.730125674 container died 03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a (image=quay.io/ceph/ceph:v20, name=happy_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 07:57:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:24 compute-0 sudo[77892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:24 compute-0 sudo[77892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:24 compute-0 sudo[77892]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:24 compute-0 sudo[77917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:24 compute-0 sudo[77917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2b7589faaf51cda6b306ff302f45e93e02b79eb191f94f041e580351ccc5aef-merged.mount: Deactivated successfully.
Jan 31 07:57:25 compute-0 podman[77749]: 2026-01-31 07:57:25.124356638 +0000 UTC m=+1.282596146 container remove 03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a (image=quay.io/ceph/ceph:v20, name=happy_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:25 compute-0 systemd[1]: libpod-conmon-03b72fae42761d8028553b86fe3f2ee31a149260201f1bf25a17c1c1ff8fec7a.scope: Deactivated successfully.
Jan 31 07:57:25 compute-0 podman[77958]: 2026-01-31 07:57:25.175357265 +0000 UTC m=+0.030659725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:25 compute-0 podman[77970]: 2026-01-31 07:57:25.193992331 +0000 UTC m=+0.020461047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:57:25 compute-0 podman[77958]: 2026-01-31 07:57:25.290939978 +0000 UTC m=+0.146242388 container create 694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b (image=quay.io/ceph/ceph:v20, name=cranky_jackson, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 07:57:25 compute-0 systemd[1]: Started libpod-conmon-694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b.scope.
Jan 31 07:57:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffef2eb6f7e618f5f8ea3d3bc73981f146a729f0d6ab93a8c4457ec9c155a0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffef2eb6f7e618f5f8ea3d3bc73981f146a729f0d6ab93a8c4457ec9c155a0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffef2eb6f7e618f5f8ea3d3bc73981f146a729f0d6ab93a8c4457ec9c155a0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:25 compute-0 podman[77970]: 2026-01-31 07:57:25.405459871 +0000 UTC m=+0.231928487 container create 6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gagarin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:57:25 compute-0 systemd[1]: Started libpod-conmon-6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43.scope.
Jan 31 07:57:25 compute-0 podman[77958]: 2026-01-31 07:57:25.490415952 +0000 UTC m=+0.345718332 container init 694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b (image=quay.io/ceph/ceph:v20, name=cranky_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:25 compute-0 podman[77958]: 2026-01-31 07:57:25.494878923 +0000 UTC m=+0.350181293 container start 694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b (image=quay.io/ceph/ceph:v20, name=cranky_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:57:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:25 compute-0 podman[77958]: 2026-01-31 07:57:25.563464028 +0000 UTC m=+0.418766408 container attach 694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b (image=quay.io/ceph/ceph:v20, name=cranky_jackson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:57:25 compute-0 ceph-mon[75294]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:25 compute-0 ceph-mon[75294]: Added label _admin to host compute-0
Jan 31 07:57:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:25 compute-0 ceph-mon[75294]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:25 compute-0 podman[77970]: 2026-01-31 07:57:25.645783936 +0000 UTC m=+0.472252642 container init 6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:25 compute-0 podman[77970]: 2026-01-31 07:57:25.653510266 +0000 UTC m=+0.479978872 container start 6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gagarin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 07:57:25 compute-0 relaxed_gagarin[77995]: 167 167
Jan 31 07:57:25 compute-0 systemd[1]: libpod-6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43.scope: Deactivated successfully.
Jan 31 07:57:25 compute-0 podman[77970]: 2026-01-31 07:57:25.737096569 +0000 UTC m=+0.563565205 container attach 6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gagarin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:57:25 compute-0 podman[77970]: 2026-01-31 07:57:25.737574711 +0000 UTC m=+0.564043337 container died 6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2fc31c3db412ece71604174c41c2ec02eb0c317dacbad777b7d56e5a8fc3130-merged.mount: Deactivated successfully.
Jan 31 07:57:26 compute-0 podman[77970]: 2026-01-31 07:57:26.061820118 +0000 UTC m=+0.888288734 container remove 6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:26 compute-0 systemd[1]: libpod-conmon-6d1877f69272607de9bea0e377f6ee9b4c153670e1d6f32a958c55008ccaeb43.scope: Deactivated successfully.
Jan 31 07:57:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 31 07:57:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2343812934' entity='client.admin' 
Jan 31 07:57:26 compute-0 cranky_jackson[77986]: set mgr/dashboard/cluster/status
Jan 31 07:57:26 compute-0 systemd[1]: libpod-694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b.scope: Deactivated successfully.
Jan 31 07:57:26 compute-0 podman[77958]: 2026-01-31 07:57:26.17335566 +0000 UTC m=+1.028658030 container died 694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b (image=quay.io/ceph/ceph:v20, name=cranky_jackson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ffef2eb6f7e618f5f8ea3d3bc73981f146a729f0d6ab93a8c4457ec9c155a0b-merged.mount: Deactivated successfully.
Jan 31 07:57:26 compute-0 podman[77958]: 2026-01-31 07:57:26.287897735 +0000 UTC m=+1.143200105 container remove 694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b (image=quay.io/ceph/ceph:v20, name=cranky_jackson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:26 compute-0 systemd[1]: libpod-conmon-694f8b8f9c99aa985acaaf096a47e97ab5d3cb62d8cd1775f04d1a21a234a08b.scope: Deactivated successfully.
Jan 31 07:57:26 compute-0 systemd[1]: Reloading.
Jan 31 07:57:26 compute-0 systemd-rc-local-generator[78075]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:57:26 compute-0 systemd-sysv-generator[78083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:57:26 compute-0 sudo[74240]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:26 compute-0 podman[78096]: 2026-01-31 07:57:26.707595857 +0000 UTC m=+0.072398890 container create 9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:26 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:26 compute-0 podman[78096]: 2026-01-31 07:57:26.65950096 +0000 UTC m=+0.024304033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:57:26 compute-0 systemd[1]: Started libpod-conmon-9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3.scope.
Jan 31 07:57:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa9bf14f707395ef5327fbaabd99b6e865ea5f1f5680f609575c6823facb3a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa9bf14f707395ef5327fbaabd99b6e865ea5f1f5680f609575c6823facb3a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa9bf14f707395ef5327fbaabd99b6e865ea5f1f5680f609575c6823facb3a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa9bf14f707395ef5327fbaabd99b6e865ea5f1f5680f609575c6823facb3a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 podman[78096]: 2026-01-31 07:57:26.868241525 +0000 UTC m=+0.233044578 container init 9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:26 compute-0 podman[78096]: 2026-01-31 07:57:26.8735733 +0000 UTC m=+0.238376333 container start 9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:57:26 compute-0 sudo[78140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekfgwxhvowszasijiyqrxjyaaloidaxb ; /usr/bin/python3'
Jan 31 07:57:26 compute-0 sudo[78140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:26 compute-0 podman[78096]: 2026-01-31 07:57:26.89380897 +0000 UTC m=+0.258612023 container attach 9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 07:57:27 compute-0 python3[78143]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:27 compute-0 podman[78144]: 2026-01-31 07:57:27.093347855 +0000 UTC m=+0.019249743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:27 compute-0 podman[78144]: 2026-01-31 07:57:27.240129397 +0000 UTC m=+0.166031295 container create a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547 (image=quay.io/ceph/ceph:v20, name=recursing_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 07:57:27 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2343812934' entity='client.admin' 
Jan 31 07:57:27 compute-0 ceph-mon[75294]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:27 compute-0 systemd[1]: Started libpod-conmon-a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547.scope.
Jan 31 07:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2cba82bb4c89d499d5c5ca1b3f936cbcd20a56db0189eb6cc515841df42461/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2cba82bb4c89d499d5c5ca1b3f936cbcd20a56db0189eb6cc515841df42461/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:27 compute-0 agitated_gates[78113]: [
Jan 31 07:57:27 compute-0 agitated_gates[78113]:     {
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "available": false,
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "being_replaced": false,
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "ceph_device_lvm": false,
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "lsm_data": {},
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "lvs": [],
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "path": "/dev/sr0",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "rejected_reasons": [
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "Insufficient space (<5GB)",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "Has a FileSystem"
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         ],
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         "sys_api": {
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "actuators": null,
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "device_nodes": [
Jan 31 07:57:27 compute-0 agitated_gates[78113]:                 "sr0"
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             ],
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "devname": "sr0",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "human_readable_size": "482.00 KB",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "id_bus": "ata",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "model": "QEMU DVD-ROM",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "nr_requests": "2",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "parent": "/dev/sr0",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "partitions": {},
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "path": "/dev/sr0",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "removable": "1",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "rev": "2.5+",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "ro": "0",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "rotational": "1",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "sas_address": "",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "sas_device_handle": "",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "scheduler_mode": "mq-deadline",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "sectors": 0,
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "sectorsize": "2048",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "size": 493568.0,
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "support_discard": "2048",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "type": "disk",
Jan 31 07:57:27 compute-0 agitated_gates[78113]:             "vendor": "QEMU"
Jan 31 07:57:27 compute-0 agitated_gates[78113]:         }
Jan 31 07:57:27 compute-0 agitated_gates[78113]:     }
Jan 31 07:57:27 compute-0 agitated_gates[78113]: ]
Jan 31 07:57:27 compute-0 systemd[1]: libpod-9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3.scope: Deactivated successfully.
Jan 31 07:57:27 compute-0 podman[78144]: 2026-01-31 07:57:27.53447933 +0000 UTC m=+0.460381238 container init a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547 (image=quay.io/ceph/ceph:v20, name=recursing_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:27 compute-0 podman[78144]: 2026-01-31 07:57:27.539854446 +0000 UTC m=+0.465756314 container start a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547 (image=quay.io/ceph/ceph:v20, name=recursing_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:57:27 compute-0 podman[78144]: 2026-01-31 07:57:27.661203296 +0000 UTC m=+0.587105264 container attach a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547 (image=quay.io/ceph/ceph:v20, name=recursing_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:57:27 compute-0 podman[78096]: 2026-01-31 07:57:27.711626637 +0000 UTC m=+1.076429690 container died 9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 31 07:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aa9bf14f707395ef5327fbaabd99b6e865ea5f1f5680f609575c6823facb3a0-merged.mount: Deactivated successfully.
Jan 31 07:57:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2723955518' entity='client.admin' 
Jan 31 07:57:28 compute-0 systemd[1]: libpod-a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547.scope: Deactivated successfully.
Jan 31 07:57:28 compute-0 podman[78778]: 2026-01-31 07:57:28.512481003 +0000 UTC m=+1.120091587 container remove 9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:28 compute-0 systemd[1]: libpod-conmon-9e0e19cd4680cae4d06c389d1d1fd0c809a7c7bc8f11e623b30eff4382d680d3.scope: Deactivated successfully.
Jan 31 07:57:28 compute-0 sudo[77917]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:28 compute-0 podman[78144]: 2026-01-31 07:57:28.568132626 +0000 UTC m=+1.494034494 container died a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547 (image=quay.io/ceph/ceph:v20, name=recursing_yalow, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:28 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c2cba82bb4c89d499d5c5ca1b3f936cbcd20a56db0189eb6cc515841df42461-merged.mount: Deactivated successfully.
Jan 31 07:57:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:29 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2723955518' entity='client.admin' 
Jan 31 07:57:29 compute-0 ceph-mon[75294]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 07:57:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 07:57:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:29 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 07:57:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:57:29 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:57:29 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:57:29 compute-0 sudo[78826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:57:29 compute-0 sudo[78826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:29 compute-0 podman[78144]: 2026-01-31 07:57:29.930065557 +0000 UTC m=+2.855967425 container remove a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547 (image=quay.io/ceph/ceph:v20, name=recursing_yalow, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:29 compute-0 sudo[78826]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:29 compute-0 sudo[78140]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:29 compute-0 sudo[78851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph
Jan 31 07:57:29 compute-0 sudo[78851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:29 compute-0 sudo[78851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 systemd[1]: libpod-conmon-a488c74357dbcda73d354151223eb697612d33b6a2812a76efa577a3824aa547.scope: Deactivated successfully.
Jan 31 07:57:30 compute-0 sudo[78876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[78876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[78876]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[78901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:30 compute-0 sudo[78901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[78901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[78926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[78926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[78926]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[78974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[78974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[78974]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[79023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 07:57:30 compute-0 sudo[79076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79076]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf
Jan 31 07:57:30 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf
Jan 31 07:57:30 compute-0 sudo[79124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config
Jan 31 07:57:30 compute-0 sudo[79124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79124]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config
Jan 31 07:57:30 compute-0 sudo[79149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79149]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[79174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79174]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:30 compute-0 sudo[79222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79222]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[79265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 07:57:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:57:30 compute-0 ceph-mon[75294]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:57:30 compute-0 sudo[79321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpuyfspljrvcvmnkdqvkpkvfqstjnybh ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846250.2264278-36787-232829905777114/async_wrapper.py j250827779516 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846250.2264278-36787-232829905777114/AnsiballZ_command.py _'
Jan 31 07:57:30 compute-0 sudo[79321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:30 compute-0 sudo[79347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[79347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79347]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:30 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:30 compute-0 sudo[79372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf.new
Jan 31 07:57:30 compute-0 sudo[79372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79372]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf.new /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf
Jan 31 07:57:30 compute-0 sudo[79397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79397]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:57:30 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:57:30 compute-0 sudo[79422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:57:30 compute-0 sudo[79422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79422]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 ansible-async_wrapper.py[79343]: Invoked with j250827779516 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846250.2264278-36787-232829905777114/AnsiballZ_command.py _
Jan 31 07:57:30 compute-0 ansible-async_wrapper.py[79450]: Starting module and watcher
Jan 31 07:57:30 compute-0 ansible-async_wrapper.py[79450]: Start watching 79453 (30)
Jan 31 07:57:30 compute-0 ansible-async_wrapper.py[79453]: Start module (79453)
Jan 31 07:57:30 compute-0 ansible-async_wrapper.py[79343]: Return async_wrapper task started.
Jan 31 07:57:30 compute-0 sudo[79321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph
Jan 31 07:57:30 compute-0 sudo[79448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79448]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:57:30 compute-0 sudo[79477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79477]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:30 compute-0 sudo[79502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:30 compute-0 sudo[79502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:30 compute-0 sudo[79502]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 sudo[79527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:57:31 compute-0 python3[79457]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:31 compute-0 sudo[79527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79527]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 sudo[79589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:57:31 compute-0 sudo[79589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79589]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 podman[79551]: 2026-01-31 07:57:31.07542768 +0000 UTC m=+0.025893744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:31 compute-0 podman[79551]: 2026-01-31 07:57:31.180803946 +0000 UTC m=+0.131270020 container create 73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94 (image=quay.io/ceph/ceph:v20, name=vigorous_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 07:57:31 compute-0 sudo[79614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:57:31 compute-0 sudo[79614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79614]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 sudo[79639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 31 07:57:31 compute-0 sudo[79639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79639]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring
Jan 31 07:57:31 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring
Jan 31 07:57:31 compute-0 sudo[79664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config
Jan 31 07:57:31 compute-0 sudo[79664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79664]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 systemd[1]: Started libpod-conmon-73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94.scope.
Jan 31 07:57:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:31 compute-0 sudo[79689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config
Jan 31 07:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00923ad64e076f0e49be724c3acd6b81d458e4bec9dd68e15099a0361f2941d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00923ad64e076f0e49be724c3acd6b81d458e4bec9dd68e15099a0361f2941d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:31 compute-0 sudo[79689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 podman[79551]: 2026-01-31 07:57:31.42990366 +0000 UTC m=+0.380369704 container init 73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94 (image=quay.io/ceph/ceph:v20, name=vigorous_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:31 compute-0 podman[79551]: 2026-01-31 07:57:31.435725278 +0000 UTC m=+0.386191322 container start 73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94 (image=quay.io/ceph/ceph:v20, name=vigorous_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:57:31 compute-0 sudo[79719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring.new
Jan 31 07:57:31 compute-0 sudo[79719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79719]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 sudo[79745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:31 compute-0 sudo[79745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 podman[79551]: 2026-01-31 07:57:31.520544773 +0000 UTC m=+0.471010857 container attach 73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94 (image=quay.io/ceph/ceph:v20, name=vigorous_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 31 07:57:31 compute-0 sudo[79770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring.new
Jan 31 07:57:31 compute-0 sudo[79770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79770]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 sudo[79837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring.new
Jan 31 07:57:31 compute-0 sudo[79837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79837]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 sudo[79862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring.new
Jan 31 07:57:31 compute-0 sudo[79862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79862]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 sudo[79887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dc03f344-536f-5591-add9-31059f42637c/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring.new /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring
Jan 31 07:57:31 compute-0 sudo[79887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:31 compute-0 sudo[79887]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:31 compute-0 ceph-mon[75294]: Updating compute-0:/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.conf
Jan 31 07:57:31 compute-0 ceph-mon[75294]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:31 compute-0 ceph-mon[75294]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:57:31 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:57:31 compute-0 vigorous_lamport[79714]: 
Jan 31 07:57:31 compute-0 vigorous_lamport[79714]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:57:31 compute-0 systemd[1]: libpod-73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94.scope: Deactivated successfully.
Jan 31 07:57:31 compute-0 podman[79551]: 2026-01-31 07:57:31.883450122 +0000 UTC m=+0.833916156 container died 73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94 (image=quay.io/ceph/ceph:v20, name=vigorous_lamport, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:32 compute-0 sudo[79972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjnejbqswpsewitzjtduujeexhhxsmc ; /usr/bin/python3'
Jan 31 07:57:32 compute-0 sudo[79972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 07:57:32 compute-0 python3[79975]: ansible-ansible.legacy.async_status Invoked with jid=j250827779516.79343 mode=status _async_dir=/root/.ansible_async
Jan 31 07:57:32 compute-0 sudo[79972]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-00923ad64e076f0e49be724c3acd6b81d458e4bec9dd68e15099a0361f2941d7-merged.mount: Deactivated successfully.
Jan 31 07:57:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:32 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev d2c47518-7710-4cb1-a960-300f38d464ff (Updating crash deployment (+1 -> 1))
Jan 31 07:57:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 31 07:57:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 07:57:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:57:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:32 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 07:57:32 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 07:57:32 compute-0 sudo[79976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:32 compute-0 sudo[79976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:32 compute-0 sudo[79976]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:32 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:32 compute-0 sudo[80001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:32 compute-0 sudo[80001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:32 compute-0 podman[79551]: 2026-01-31 07:57:32.849424237 +0000 UTC m=+1.799890271 container remove 73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94 (image=quay.io/ceph/ceph:v20, name=vigorous_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:32 compute-0 ceph-mon[75294]: Updating compute-0:/var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/config/ceph.client.admin.keyring
Jan 31 07:57:32 compute-0 ceph-mon[75294]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:57:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 07:57:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:57:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:32 compute-0 ansible-async_wrapper.py[79453]: Module complete (79453)
Jan 31 07:57:32 compute-0 systemd[1]: libpod-conmon-73a5d2090a45b8afaae8deb6de7358f4de7278f58f1ea29a651f3ec51994de94.scope: Deactivated successfully.
Jan 31 07:57:33 compute-0 podman[80066]: 2026-01-31 07:57:33.139365691 +0000 UTC m=+0.023486650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:57:33 compute-0 podman[80066]: 2026-01-31 07:57:33.292844044 +0000 UTC m=+0.176964953 container create ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:33 compute-0 systemd[1]: Started libpod-conmon-ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672.scope.
Jan 31 07:57:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:33 compute-0 sudo[80131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbzmrmuvblcrsuhcsrrujgozhtllncg ; /usr/bin/python3'
Jan 31 07:57:33 compute-0 sudo[80131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:33 compute-0 python3[80133]: ansible-ansible.legacy.async_status Invoked with jid=j250827779516.79343 mode=status _async_dir=/root/.ansible_async
Jan 31 07:57:33 compute-0 sudo[80131]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:33 compute-0 sudo[80180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tprzuvwyjqkebrojxtwgupwezydluglj ; /usr/bin/python3'
Jan 31 07:57:33 compute-0 sudo[80180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:34 compute-0 podman[80066]: 2026-01-31 07:57:34.067262751 +0000 UTC m=+0.951383700 container init ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 31 07:57:34 compute-0 podman[80066]: 2026-01-31 07:57:34.077449247 +0000 UTC m=+0.961570176 container start ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 07:57:34 compute-0 infallible_archimedes[80082]: 167 167
Jan 31 07:57:34 compute-0 systemd[1]: libpod-ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672.scope: Deactivated successfully.
Jan 31 07:57:34 compute-0 python3[80182]: ansible-ansible.legacy.async_status Invoked with jid=j250827779516.79343 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 07:57:34 compute-0 sudo[80180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:34 compute-0 ceph-mon[75294]: Deploying daemon crash.compute-0 on compute-0
Jan 31 07:57:34 compute-0 ceph-mon[75294]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:34 compute-0 podman[80066]: 2026-01-31 07:57:34.306622999 +0000 UTC m=+1.190743938 container attach ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:34 compute-0 podman[80066]: 2026-01-31 07:57:34.307275728 +0000 UTC m=+1.191396637 container died ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_archimedes, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:34 compute-0 sudo[80219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhoskxkbuwicjncrfyryiwhoaayeluvb ; /usr/bin/python3'
Jan 31 07:57:34 compute-0 sudo[80219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:34 compute-0 python3[80221]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:57:34 compute-0 sudo[80219]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:34 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5664ea96d0501686dbe593484490b172d088801fb6f3fd4b2871c760250b0605-merged.mount: Deactivated successfully.
Jan 31 07:57:35 compute-0 sudo[80248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igbrjtbuhmevnuukoclaoairososdosa ; /usr/bin/python3'
Jan 31 07:57:35 compute-0 sudo[80248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:35 compute-0 python3[80250]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:35 compute-0 podman[80066]: 2026-01-31 07:57:35.257945967 +0000 UTC m=+2.142066906 container remove ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:35 compute-0 ceph-mon[75294]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:35 compute-0 podman[80251]: 2026-01-31 07:57:35.273025836 +0000 UTC m=+0.033032978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:35 compute-0 podman[80251]: 2026-01-31 07:57:35.449443594 +0000 UTC m=+0.209450646 container create 7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185 (image=quay.io/ceph/ceph:v20, name=stoic_sammet, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:57:35 compute-0 systemd[1]: Started libpod-conmon-7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185.scope.
Jan 31 07:57:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:35 compute-0 systemd[1]: libpod-conmon-ee8a377b7e9d4fee41d9eb023788b606649b146193ed8a59870b5bf6e9340672.scope: Deactivated successfully.
Jan 31 07:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08731f6d05ccbb5b55985c4c040a4e7a163b49d05f4c332b2f4d3ca3056dee96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08731f6d05ccbb5b55985c4c040a4e7a163b49d05f4c332b2f4d3ca3056dee96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08731f6d05ccbb5b55985c4c040a4e7a163b49d05f4c332b2f4d3ca3056dee96/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:35 compute-0 systemd[1]: Reloading.
Jan 31 07:57:35 compute-0 podman[80251]: 2026-01-31 07:57:35.65559873 +0000 UTC m=+0.415605822 container init 7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185 (image=quay.io/ceph/ceph:v20, name=stoic_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:57:35 compute-0 podman[80251]: 2026-01-31 07:57:35.666554477 +0000 UTC m=+0.426561539 container start 7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185 (image=quay.io/ceph/ceph:v20, name=stoic_sammet, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 07:57:35 compute-0 systemd-rc-local-generator[80295]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:57:35 compute-0 systemd-sysv-generator[80301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:57:35 compute-0 podman[80251]: 2026-01-31 07:57:35.738268607 +0000 UTC m=+0.498275649 container attach 7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185 (image=quay.io/ceph/ceph:v20, name=stoic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:57:35 compute-0 ansible-async_wrapper.py[79450]: Done in kid B.
Jan 31 07:57:35 compute-0 systemd[1]: Reloading.
Jan 31 07:57:35 compute-0 systemd-rc-local-generator[80352]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:57:35 compute-0 systemd-sysv-generator[80357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:57:36 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:57:36 compute-0 stoic_sammet[80266]: 
Jan 31 07:57:36 compute-0 stoic_sammet[80266]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:57:36 compute-0 podman[80368]: 2026-01-31 07:57:36.166009178 +0000 UTC m=+0.023598863 container died 7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185 (image=quay.io/ceph/ceph:v20, name=stoic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:57:36 compute-0 systemd[1]: libpod-7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185.scope: Deactivated successfully.
Jan 31 07:57:36 compute-0 systemd[1]: Starting Ceph crash.compute-0 for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-08731f6d05ccbb5b55985c4c040a4e7a163b49d05f4c332b2f4d3ca3056dee96-merged.mount: Deactivated successfully.
Jan 31 07:57:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:36 compute-0 podman[80368]: 2026-01-31 07:57:36.744807646 +0000 UTC m=+0.602397311 container remove 7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185 (image=quay.io/ceph/ceph:v20, name=stoic_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 07:57:36 compute-0 systemd[1]: libpod-conmon-7aa06c41899d6201e397683aafeea1d0acf9e906ab41681757769f5121a67185.scope: Deactivated successfully.
Jan 31 07:57:36 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:36 compute-0 sudo[80248]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:37 compute-0 podman[80432]: 2026-01-31 07:57:36.939795878 +0000 UTC m=+0.021290710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:57:37 compute-0 podman[80432]: 2026-01-31 07:57:37.111735413 +0000 UTC m=+0.193230225 container create 1e3014b21f61c1cff202c040894301dd8f10ce760b920d1cfd1f50a5c91f63c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:57:37 compute-0 sudo[80468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbxnevloebfkwtucxpaadarsrqnpzfom ; /usr/bin/python3'
Jan 31 07:57:37 compute-0 sudo[80468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9325b62d078bda3bed138391d8ad13a445e027de4ad69e4718ee04881c84caf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9325b62d078bda3bed138391d8ad13a445e027de4ad69e4718ee04881c84caf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9325b62d078bda3bed138391d8ad13a445e027de4ad69e4718ee04881c84caf/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9325b62d078bda3bed138391d8ad13a445e027de4ad69e4718ee04881c84caf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:37 compute-0 python3[80470]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:37 compute-0 podman[80432]: 2026-01-31 07:57:37.401117181 +0000 UTC m=+0.482612083 container init 1e3014b21f61c1cff202c040894301dd8f10ce760b920d1cfd1f50a5c91f63c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 07:57:37 compute-0 podman[80432]: 2026-01-31 07:57:37.407878645 +0000 UTC m=+0.489373497 container start 1e3014b21f61c1cff202c040894301dd8f10ce760b920d1cfd1f50a5c91f63c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: 2026-01-31T07:57:37.548+0000 7f4583e35640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: 2026-01-31T07:57:37.548+0000 7f4583e35640 -1 AuthRegistry(0x7f457c052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: 2026-01-31T07:57:37.549+0000 7f4583e35640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: 2026-01-31T07:57:37.549+0000 7f4583e35640 -1 AuthRegistry(0x7f4583e33fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: 2026-01-31T07:57:37.550+0000 7f4581baa640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: 2026-01-31T07:57:37.550+0000 7f4583e35640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 07:57:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-crash-compute-0[80473]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 07:57:37 compute-0 bash[80432]: 1e3014b21f61c1cff202c040894301dd8f10ce760b920d1cfd1f50a5c91f63c7
Jan 31 07:57:37 compute-0 systemd[1]: Started Ceph crash.compute-0 for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:57:37 compute-0 podman[80476]: 2026-01-31 07:57:37.661433329 +0000 UTC m=+0.358420337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:37 compute-0 podman[80476]: 2026-01-31 07:57:37.825775038 +0000 UTC m=+0.522762006 container create de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e (image=quay.io/ceph/ceph:v20, name=priceless_hypatia, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:37 compute-0 sudo[80001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:38 compute-0 systemd[1]: Started libpod-conmon-de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e.scope.
Jan 31 07:57:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653f16a7be541eff0bbebf03530d8876611a0af051ee1abedc3481d7c4063ba8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653f16a7be541eff0bbebf03530d8876611a0af051ee1abedc3481d7c4063ba8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653f16a7be541eff0bbebf03530d8876611a0af051ee1abedc3481d7c4063ba8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:38 compute-0 ceph-mon[75294]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:57:38 compute-0 ceph-mon[75294]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:38 compute-0 podman[80476]: 2026-01-31 07:57:38.386858984 +0000 UTC m=+1.083845942 container init de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e (image=quay.io/ceph/ceph:v20, name=priceless_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:57:38 compute-0 podman[80476]: 2026-01-31 07:57:38.393552756 +0000 UTC m=+1.090539714 container start de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e (image=quay.io/ceph/ceph:v20, name=priceless_hypatia, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:38 compute-0 podman[80476]: 2026-01-31 07:57:38.551695496 +0000 UTC m=+1.248682514 container attach de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e (image=quay.io/ceph/ceph:v20, name=priceless_hypatia, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:38 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 07:57:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:38 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:38 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:38 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev d2c47518-7710-4cb1-a960-300f38d464ff (Updating crash deployment (+1 -> 1))
Jan 31 07:57:38 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event d2c47518-7710-4cb1-a960-300f38d464ff (Updating crash deployment (+1 -> 1)) in 6 seconds
Jan 31 07:57:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 07:57:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1087994903' entity='client.admin' 
Jan 31 07:57:39 compute-0 systemd[1]: libpod-de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e.scope: Deactivated successfully.
Jan 31 07:57:39 compute-0 podman[80476]: 2026-01-31 07:57:39.220273826 +0000 UTC m=+1.917260784 container died de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e (image=quay.io/ceph/ceph:v20, name=priceless_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:57:39 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:39 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:39 compute-0 ceph-mon[75294]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:39 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:39 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:39 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev c04166cb-d642-4b19-987f-2237dc9ed438 (Updating mgr deployment (+1 -> 2))
Jan 31 07:57:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mefzwz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mefzwz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 07:57:39 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 1 completed events
Jan 31 07:57:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mefzwz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 07:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-653f16a7be541eff0bbebf03530d8876611a0af051ee1abedc3481d7c4063ba8-merged.mount: Deactivated successfully.
Jan 31 07:57:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr services"} : dispatch
Jan 31 07:57:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:39 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.mefzwz on compute-0
Jan 31 07:57:39 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.mefzwz on compute-0
Jan 31 07:57:39 compute-0 sudo[80541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:39 compute-0 sudo[80541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:39 compute-0 sudo[80541]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:40 compute-0 sudo[80566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:40 compute-0 sudo[80566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:40 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1087994903' entity='client.admin' 
Jan 31 07:57:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mefzwz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 07:57:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mefzwz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 07:57:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr services"} : dispatch
Jan 31 07:57:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:40 compute-0 ceph-mon[75294]: Deploying daemon mgr.compute-0.mefzwz on compute-0
Jan 31 07:57:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:40 compute-0 podman[80476]: 2026-01-31 07:57:40.882121571 +0000 UTC m=+3.579108579 container remove de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e (image=quay.io/ceph/ceph:v20, name=priceless_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:40 compute-0 sudo[80468]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:40 compute-0 systemd[1]: libpod-conmon-de3c3f6fb9ebc89bebbffb4728a58631fbc047d1403fba2e75546154dc4ff06e.scope: Deactivated successfully.
Jan 31 07:57:41 compute-0 sudo[80655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfciybydqjpuxkaxkfywjusdgbvrdzsk ; /usr/bin/python3'
Jan 31 07:57:41 compute-0 sudo[80655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:41 compute-0 podman[80656]: 2026-01-31 07:57:41.13399923 +0000 UTC m=+0.033545413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:57:41 compute-0 python3[80659]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:41 compute-0 podman[80656]: 2026-01-31 07:57:41.529912336 +0000 UTC m=+0.429458469 container create 86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chaum, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:57:41 compute-0 systemd[1]: Started libpod-conmon-86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17.scope.
Jan 31 07:57:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:42 compute-0 podman[80656]: 2026-01-31 07:57:42.143461958 +0000 UTC m=+1.043008151 container init 86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:42 compute-0 podman[80656]: 2026-01-31 07:57:42.152898665 +0000 UTC m=+1.052444798 container start 86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chaum, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 07:57:42 compute-0 strange_chaum[80685]: 167 167
Jan 31 07:57:42 compute-0 systemd[1]: libpod-86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17.scope: Deactivated successfully.
Jan 31 07:57:42 compute-0 podman[80672]: 2026-01-31 07:57:42.15674515 +0000 UTC m=+0.853052526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:42 compute-0 ceph-mon[75294]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:42 compute-0 podman[80656]: 2026-01-31 07:57:42.674012694 +0000 UTC m=+1.573558827 container attach 86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chaum, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:42 compute-0 podman[80656]: 2026-01-31 07:57:42.674964191 +0000 UTC m=+1.574510314 container died 86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:42 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6971ec8467aa5424f1bfed7541a40e28876ce0809b4c5cf1fa8efb579825db28-merged.mount: Deactivated successfully.
Jan 31 07:57:43 compute-0 podman[80656]: 2026-01-31 07:57:43.674051947 +0000 UTC m=+2.573598070 container remove 86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:43 compute-0 systemd[1]: libpod-conmon-86e553608f87160c1e68b221a4ca9546ad383cfea6f1906d1682f130f9472a17.scope: Deactivated successfully.
Jan 31 07:57:43 compute-0 ceph-mon[75294]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:43 compute-0 podman[80672]: 2026-01-31 07:57:43.987672763 +0000 UTC m=+2.683980069 container create 18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1 (image=quay.io/ceph/ceph:v20, name=determined_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 07:57:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:44 compute-0 systemd[1]: Started libpod-conmon-18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1.scope.
Jan 31 07:57:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c30eac2c6e53942b01ec222eb172963830479fcd2c71148c472ae31001b34264/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c30eac2c6e53942b01ec222eb172963830479fcd2c71148c472ae31001b34264/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c30eac2c6e53942b01ec222eb172963830479fcd2c71148c472ae31001b34264/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:44 compute-0 podman[80672]: 2026-01-31 07:57:44.693547797 +0000 UTC m=+3.389855113 container init 18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1 (image=quay.io/ceph/ceph:v20, name=determined_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:44 compute-0 podman[80672]: 2026-01-31 07:57:44.701560274 +0000 UTC m=+3.397867580 container start 18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1 (image=quay.io/ceph/ceph:v20, name=determined_goldstine, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:44 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:45 compute-0 podman[80672]: 2026-01-31 07:57:45.019918871 +0000 UTC m=+3.716226207 container attach 18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1 (image=quay.io/ceph/ceph:v20, name=determined_goldstine, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:57:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 31 07:57:45 compute-0 systemd[1]: Reloading.
Jan 31 07:57:45 compute-0 systemd-sysv-generator[80754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:57:45 compute-0 systemd-rc-local-generator[80748]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:57:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1673297399' entity='client.admin' 
Jan 31 07:57:45 compute-0 podman[80672]: 2026-01-31 07:57:45.265404686 +0000 UTC m=+3.961712022 container died 18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1 (image=quay.io/ceph/ceph:v20, name=determined_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 07:57:45 compute-0 systemd[1]: libpod-18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1.scope: Deactivated successfully.
Jan 31 07:57:45 compute-0 systemd[1]: Reloading.
Jan 31 07:57:45 compute-0 systemd-rc-local-generator[80809]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:57:45 compute-0 systemd-sysv-generator[80812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:57:45 compute-0 ceph-mon[75294]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:45 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1673297399' entity='client.admin' 
Jan 31 07:57:45 compute-0 systemd[1]: Starting Ceph mgr.compute-0.mefzwz for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c30eac2c6e53942b01ec222eb172963830479fcd2c71148c472ae31001b34264-merged.mount: Deactivated successfully.
Jan 31 07:57:46 compute-0 podman[80672]: 2026-01-31 07:57:46.348169437 +0000 UTC m=+5.044476743 container remove 18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1 (image=quay.io/ceph/ceph:v20, name=determined_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:57:46 compute-0 systemd[1]: libpod-conmon-18f4383ca5c5bbbd5e7249ddf09cc2aff49b64e1e6011df541dfcfb81e6f69f1.scope: Deactivated successfully.
Jan 31 07:57:46 compute-0 sudo[80655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:46 compute-0 sudo[80896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woljaehsegzwpnuivabmpbuyvpkrzsmm ; /usr/bin/python3'
Jan 31 07:57:46 compute-0 sudo[80896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:46 compute-0 podman[80893]: 2026-01-31 07:57:46.575674743 +0000 UTC m=+0.028174247 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:57:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:46 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:47 compute-0 python3[80906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:47 compute-0 podman[80893]: 2026-01-31 07:57:47.130418117 +0000 UTC m=+0.582917621 container create 38dc5ae2c8adf343bb973e448e918cc4fadbdc675013cd9b9f8bf8ca17828017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 07:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda37ff3ea497970f64c56eb07fa801ad27940c48e4a859035d6f8b924ffe3b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda37ff3ea497970f64c56eb07fa801ad27940c48e4a859035d6f8b924ffe3b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda37ff3ea497970f64c56eb07fa801ad27940c48e4a859035d6f8b924ffe3b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda37ff3ea497970f64c56eb07fa801ad27940c48e4a859035d6f8b924ffe3b7/merged/var/lib/ceph/mgr/ceph-compute-0.mefzwz supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:47 compute-0 ceph-mon[75294]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:47 compute-0 podman[80909]: 2026-01-31 07:57:47.209003783 +0000 UTC m=+0.139503313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:47 compute-0 podman[80909]: 2026-01-31 07:57:47.517025059 +0000 UTC m=+0.447524599 container create faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c (image=quay.io/ceph/ceph:v20, name=exciting_boyd, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:47 compute-0 systemd[1]: Started libpod-conmon-faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c.scope.
Jan 31 07:57:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2c14e80546e6ce726f7374e931232fa4befa946f3eb180bc8d71c6ef03753e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2c14e80546e6ce726f7374e931232fa4befa946f3eb180bc8d71c6ef03753e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2c14e80546e6ce726f7374e931232fa4befa946f3eb180bc8d71c6ef03753e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:47 compute-0 podman[80893]: 2026-01-31 07:57:47.814184489 +0000 UTC m=+1.266683983 container init 38dc5ae2c8adf343bb973e448e918cc4fadbdc675013cd9b9f8bf8ca17828017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:57:47 compute-0 podman[80893]: 2026-01-31 07:57:47.819880335 +0000 UTC m=+1.272379829 container start 38dc5ae2c8adf343bb973e448e918cc4fadbdc675013cd9b9f8bf8ca17828017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:47 compute-0 ceph-mgr[80935]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:57:47 compute-0 ceph-mgr[80935]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 07:57:47 compute-0 ceph-mgr[80935]: pidfile_write: ignore empty --pid-file
Jan 31 07:57:47 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'alerts'
Jan 31 07:57:47 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'balancer'
Jan 31 07:57:48 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'cephadm'
Jan 31 07:57:48 compute-0 podman[80909]: 2026-01-31 07:57:48.079805571 +0000 UTC m=+1.010305091 container init faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c (image=quay.io/ceph/ceph:v20, name=exciting_boyd, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:57:48 compute-0 podman[80909]: 2026-01-31 07:57:48.085693411 +0000 UTC m=+1.016192911 container start faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c (image=quay.io/ceph/ceph:v20, name=exciting_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:48 compute-0 podman[80909]: 2026-01-31 07:57:48.188460456 +0000 UTC m=+1.118959956 container attach faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c (image=quay.io/ceph/ceph:v20, name=exciting_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:57:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 31 07:57:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4163231516' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 07:57:48 compute-0 bash[80893]: 38dc5ae2c8adf343bb973e448e918cc4fadbdc675013cd9b9f8bf8ca17828017
Jan 31 07:57:48 compute-0 systemd[1]: Started Ceph mgr.compute-0.mefzwz for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:57:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 07:57:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:57:48 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'crash'
Jan 31 07:57:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:48 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:48 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'dashboard'
Jan 31 07:57:48 compute-0 sudo[80566]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4163231516' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 07:57:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 07:57:48 compute-0 exciting_boyd[80931]: set require_min_compat_client to mimic
Jan 31 07:57:49 compute-0 systemd[1]: libpod-faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c.scope: Deactivated successfully.
Jan 31 07:57:49 compute-0 podman[80989]: 2026-01-31 07:57:49.047152164 +0000 UTC m=+0.033943574 container died faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c (image=quay.io/ceph/ceph:v20, name=exciting_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:49 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 07:57:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:49 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4163231516' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 07:57:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 07:57:49 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'devicehealth'
Jan 31 07:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc2c14e80546e6ce726f7374e931232fa4befa946f3eb180bc8d71c6ef03753e-merged.mount: Deactivated successfully.
Jan 31 07:57:49 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 07:57:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:49 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev c04166cb-d642-4b19-987f-2237dc9ed438 (Updating mgr deployment (+1 -> 2))
Jan 31 07:57:49 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event c04166cb-d642-4b19-987f-2237dc9ed438 (Updating mgr deployment (+1 -> 2)) in 10 seconds
Jan 31 07:57:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 07:57:49 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz[80922]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 07:57:49 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz[80922]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 07:57:49 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz[80922]:   from numpy import show_config as show_numpy_config
Jan 31 07:57:49 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'influx'
Jan 31 07:57:49 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'insights'
Jan 31 07:57:49 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'iostat'
Jan 31 07:57:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:49 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'k8sevents'
Jan 31 07:57:49 compute-0 sudo[81004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:57:49 compute-0 sudo[81004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:49 compute-0 sudo[81004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:49 compute-0 sudo[81029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:49 compute-0 sudo[81029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:49 compute-0 sudo[81029]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:50 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 2 completed events
Jan 31 07:57:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 07:57:50 compute-0 sudo[81054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 07:57:50 compute-0 sudo[81054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:50 compute-0 podman[80989]: 2026-01-31 07:57:50.186442582 +0000 UTC m=+1.173233912 container remove faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c (image=quay.io/ceph/ceph:v20, name=exciting_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 07:57:50 compute-0 systemd[1]: libpod-conmon-faef22e75b619e4d937494d3a0e70807afa9b092b6f6b54ee24c6e57c994db5c.scope: Deactivated successfully.
Jan 31 07:57:50 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'localpool'
Jan 31 07:57:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:50 compute-0 sudo[80896]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:50 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 07:57:50 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'mirroring'
Jan 31 07:57:50 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'nfs'
Jan 31 07:57:50 compute-0 sudo[81162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmcbevbxrjmbljhjlygqgjhvqnslhfwt ; /usr/bin/python3'
Jan 31 07:57:50 compute-0 sudo[81162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:50 compute-0 ceph-mon[75294]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:50 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4163231516' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 07:57:50 compute-0 ceph-mon[75294]: osdmap e3: 0 total, 0 up, 0 in
Jan 31 07:57:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_07:57:50
Jan 31 07:57:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:57:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 07:57:50 compute-0 ceph-mgr[75591]: [balancer INFO root] No pools available
Jan 31 07:57:50 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:50 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'orchestrator'
Jan 31 07:57:50 compute-0 python3[81164]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:51 compute-0 podman[81124]: 2026-01-31 07:57:51.001081933 +0000 UTC m=+0.655266838 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:57:51 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 07:57:51 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'osd_support'
Jan 31 07:57:51 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 07:57:51 compute-0 podman[81124]: 2026-01-31 07:57:51.218123325 +0000 UTC m=+0.872308230 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:57:51 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'progress'
Jan 31 07:57:51 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'prometheus'
Jan 31 07:57:51 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'rbd_support'
Jan 31 07:57:51 compute-0 podman[81165]: 2026-01-31 07:57:51.67069931 +0000 UTC m=+0.861554658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:51 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'rgw'
Jan 31 07:57:51 compute-0 ceph-mon[75294]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:51 compute-0 podman[81165]: 2026-01-31 07:57:51.819563088 +0000 UTC m=+1.010418426 container create 18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f (image=quay.io/ceph/ceph:v20, name=nifty_mccarthy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:57:51 compute-0 systemd[1]: Started libpod-conmon-18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f.scope.
Jan 31 07:57:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ce88b152018190bebf9a19fc8f1c9a2e325a8c2fbb82f8da584ab4484f45fb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ce88b152018190bebf9a19fc8f1c9a2e325a8c2fbb82f8da584ab4484f45fb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ce88b152018190bebf9a19fc8f1c9a2e325a8c2fbb82f8da584ab4484f45fb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:52 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'rook'
Jan 31 07:57:52 compute-0 podman[81165]: 2026-01-31 07:57:52.079249869 +0000 UTC m=+1.270105247 container init 18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f (image=quay.io/ceph/ceph:v20, name=nifty_mccarthy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:52 compute-0 podman[81165]: 2026-01-31 07:57:52.086818905 +0000 UTC m=+1.277674273 container start 18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f (image=quay.io/ceph/ceph:v20, name=nifty_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:57:52 compute-0 podman[81165]: 2026-01-31 07:57:52.133068412 +0000 UTC m=+1.323923830 container attach 18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f (image=quay.io/ceph/ceph:v20, name=nifty_mccarthy, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:52 compute-0 sudo[81054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 07:57:52 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:52 compute-0 sudo[81298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:52 compute-0 sudo[81298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:52 compute-0 sudo[81298]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:52 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'selftest'
Jan 31 07:57:52 compute-0 sudo[81323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 07:57:52 compute-0 sudo[81323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:52 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'smb'
Jan 31 07:57:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:52 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:52 compute-0 sudo[81348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:57:52 compute-0 sudo[81348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:52 compute-0 sudo[81348]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:52 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 07:57:52 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:52 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:57:52 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:57:52 compute-0 sudo[81375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:52 compute-0 sudo[81375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:52 compute-0 sudo[81375]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:52 compute-0 sudo[81408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:52 compute-0 sudo[81408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:52 compute-0 sudo[81323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 07:57:52 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'snap_schedule'
Jan 31 07:57:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 07:57:53 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'stats'
Jan 31 07:57:53 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'status'
Jan 31 07:57:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 07:57:53 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'telegraf'
Jan 31 07:57:53 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'telemetry'
Jan 31 07:57:53 compute-0 podman[81457]: 2026-01-31 07:57:53.199257983 +0000 UTC m=+0.029278767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 07:57:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: [cephadm INFO root] Added host compute-0
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 31 07:57:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 07:57:53 compute-0 podman[81457]: 2026-01-31 07:57:53.345821067 +0000 UTC m=+0.175841821 container create cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522 (image=quay.io/ceph/ceph:v20, name=affectionate_hodgkin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:53 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 07:57:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 31 07:57:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:57:53 compute-0 ceph-mon[75294]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:53 compute-0 ceph-mon[75294]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 systemd[1]: Started libpod-conmon-cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522.scope.
Jan 31 07:57:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 31 07:57:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 31 07:57:53 compute-0 podman[81457]: 2026-01-31 07:57:53.646097823 +0000 UTC m=+0.476118657 container init cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522 (image=quay.io/ceph/ceph:v20, name=affectionate_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:57:53 compute-0 podman[81457]: 2026-01-31 07:57:53.652499696 +0000 UTC m=+0.482520440 container start cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522 (image=quay.io/ceph/ceph:v20, name=affectionate_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:57:53 compute-0 affectionate_hodgkin[81473]: 167 167
Jan 31 07:57:53 compute-0 systemd[1]: libpod-cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522.scope: Deactivated successfully.
Jan 31 07:57:53 compute-0 ceph-mgr[80935]: mgr[py] Loading python module 'volumes'
Jan 31 07:57:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:53 compute-0 podman[81457]: 2026-01-31 07:57:53.70961983 +0000 UTC m=+0.539640594 container attach cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522 (image=quay.io/ceph/ceph:v20, name=affectionate_hodgkin, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:57:53 compute-0 podman[81457]: 2026-01-31 07:57:53.711191702 +0000 UTC m=+0.541212436 container died cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522 (image=quay.io/ceph/ceph:v20, name=affectionate_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:53 compute-0 nifty_mccarthy[81224]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 07:57:53 compute-0 nifty_mccarthy[81224]: Scheduled mon update...
Jan 31 07:57:53 compute-0 nifty_mccarthy[81224]: Scheduled mgr update...
Jan 31 07:57:53 compute-0 nifty_mccarthy[81224]: Scheduled osd.default_drive_group update...
Jan 31 07:57:53 compute-0 systemd[1]: libpod-18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f.scope: Deactivated successfully.
Jan 31 07:57:53 compute-0 podman[81165]: 2026-01-31 07:57:53.816371652 +0000 UTC m=+3.007227000 container died 18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f (image=quay.io/ceph/ceph:v20, name=nifty_mccarthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-00ce88b152018190bebf9a19fc8f1c9a2e325a8c2fbb82f8da584ab4484f45fb-merged.mount: Deactivated successfully.
Jan 31 07:57:53 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : Standby manager daemon compute-0.mefzwz started
Jan 31 07:57:53 compute-0 ceph-mgr[80935]: ms_deliver_dispatch: unhandled message 0x55e938ae0000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 07:57:53 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from mgr.compute-0.mefzwz 192.168.122.100:0/1952183972; not ready for session (expect reconnect)
Jan 31 07:57:54 compute-0 podman[81165]: 2026-01-31 07:57:54.041179115 +0000 UTC m=+3.232034463 container remove 18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f (image=quay.io/ceph/ceph:v20, name=nifty_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:54 compute-0 systemd[1]: libpod-conmon-18beed9dc746b027f09d8b5895e941424bc6ddeb8bccd392a77743ea93b6138f.scope: Deactivated successfully.
Jan 31 07:57:54 compute-0 sudo[81162]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-799c2279c241cc213ac499b1ae023dab4a25b0540f9dd1e1cce8f9b9f8bcbc16-merged.mount: Deactivated successfully.
Jan 31 07:57:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:54 compute-0 podman[81457]: 2026-01-31 07:57:54.231408328 +0000 UTC m=+1.061429082 container remove cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522 (image=quay.io/ceph/ceph:v20, name=affectionate_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:54 compute-0 systemd[1]: libpod-conmon-cc0f4346aa80dcba281f1610423ffd8a63e374a511d7fb96aefdf165efa30522.scope: Deactivated successfully.
Jan 31 07:57:54 compute-0 sudo[81531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkjzadozqmdiwwwhhyvifmlizlblcuc ; /usr/bin/python3'
Jan 31 07:57:54 compute-0 sudo[81531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:54 compute-0 sudo[81408]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:54 compute-0 python3[81533]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:57:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.lhuavc (unknown last config time)...
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.lhuavc (unknown last config time)...
Jan 31 07:57:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.lhuavc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 07:57:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.lhuavc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 07:57:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 07:57:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr services"} : dispatch
Jan 31 07:57:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.lhuavc on compute-0
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.lhuavc on compute-0
Jan 31 07:57:54 compute-0 sudo[81542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:54 compute-0 sudo[81542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:54 compute-0 sudo[81542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:54 compute-0 podman[81535]: 2026-01-31 07:57:54.532481424 +0000 UTC m=+0.079211955 container create 7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af (image=quay.io/ceph/ceph:v20, name=strange_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:54 compute-0 sudo[81572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:57:54 compute-0 sudo[81572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:54 compute-0 ceph-mon[75294]: Added host compute-0
Jan 31 07:57:54 compute-0 ceph-mon[75294]: Saving service mon spec with placement compute-0
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:54 compute-0 ceph-mon[75294]: Saving service mgr spec with placement compute-0
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:54 compute-0 ceph-mon[75294]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 07:57:54 compute-0 ceph-mon[75294]: Saving service osd.default_drive_group spec with placement compute-0
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:54 compute-0 ceph-mon[75294]: Standby manager daemon compute-0.mefzwz started
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.lhuavc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr services"} : dispatch
Jan 31 07:57:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:54 compute-0 podman[81535]: 2026-01-31 07:57:54.473589213 +0000 UTC m=+0.020319764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:54 compute-0 systemd[1]: Started libpod-conmon-7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af.scope.
Jan 31 07:57:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2dda647bf7924aa8a9f5a6da8b26409d399a7d532e18d36c3f5728f2a05832/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2dda647bf7924aa8a9f5a6da8b26409d399a7d532e18d36c3f5728f2a05832/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f2dda647bf7924aa8a9f5a6da8b26409d399a7d532e18d36c3f5728f2a05832/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:54 compute-0 podman[81535]: 2026-01-31 07:57:54.675538184 +0000 UTC m=+0.222268735 container init 7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af (image=quay.io/ceph/ceph:v20, name=strange_sammet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:54 compute-0 podman[81535]: 2026-01-31 07:57:54.679675966 +0000 UTC m=+0.226406497 container start 7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af (image=quay.io/ceph/ceph:v20, name=strange_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:54 compute-0 podman[81535]: 2026-01-31 07:57:54.688235799 +0000 UTC m=+0.234966360 container attach 7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af (image=quay.io/ceph/ceph:v20, name=strange_sammet, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:57:54 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.lhuavc(active, since 64s), standbys: compute-0.mefzwz
Jan 31 07:57:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mefzwz", "id": "compute-0.mefzwz"} v 0)
Jan 31 07:57:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr metadata", "who": "compute-0.mefzwz", "id": "compute-0.mefzwz"} : dispatch
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:54 compute-0 podman[81622]: 2026-01-31 07:57:54.850891501 +0000 UTC m=+0.077165069 container create 81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4 (image=quay.io/ceph/ceph:v20, name=exciting_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:57:54 compute-0 podman[81622]: 2026-01-31 07:57:54.804852159 +0000 UTC m=+0.031125707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:57:54 compute-0 systemd[1]: Started libpod-conmon-81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4.scope.
Jan 31 07:57:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:55 compute-0 podman[81622]: 2026-01-31 07:57:55.004257192 +0000 UTC m=+0.230530720 container init 81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4 (image=quay.io/ceph/ceph:v20, name=exciting_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:57:55 compute-0 podman[81622]: 2026-01-31 07:57:55.009055732 +0000 UTC m=+0.235329260 container start 81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4 (image=quay.io/ceph/ceph:v20, name=exciting_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:55 compute-0 exciting_euclid[81657]: 167 167
Jan 31 07:57:55 compute-0 systemd[1]: libpod-81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4.scope: Deactivated successfully.
Jan 31 07:57:55 compute-0 conmon[81657]: conmon 81e018c4d4d1493d0d22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4.scope/container/memory.events
Jan 31 07:57:55 compute-0 podman[81622]: 2026-01-31 07:57:55.061829587 +0000 UTC m=+0.288103115 container attach 81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4 (image=quay.io/ceph/ceph:v20, name=exciting_euclid, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:55 compute-0 podman[81622]: 2026-01-31 07:57:55.062832185 +0000 UTC m=+0.289105723 container died 81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4 (image=quay.io/ceph/ceph:v20, name=exciting_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:57:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 07:57:55 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911207252' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:57:55 compute-0 strange_sammet[81601]: 
Jan 31 07:57:55 compute-0 strange_sammet[81601]: {"fsid":"dc03f344-536f-5591-add9-31059f42637c","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":86,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-31T07:56:24:478364+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T07:57:52.741188+0000","services":{}},"progress_events":{}}
Jan 31 07:57:55 compute-0 systemd[1]: libpod-7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af.scope: Deactivated successfully.
Jan 31 07:57:55 compute-0 podman[81535]: 2026-01-31 07:57:55.238290015 +0000 UTC m=+0.785020546 container died 7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af (image=quay.io/ceph/ceph:v20, name=strange_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-25bd967e06016f7cb3fa478aabfe2cc6ac168c9a9427266bc75220e3999a20e8-merged.mount: Deactivated successfully.
Jan 31 07:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f2dda647bf7924aa8a9f5a6da8b26409d399a7d532e18d36c3f5728f2a05832-merged.mount: Deactivated successfully.
Jan 31 07:57:55 compute-0 podman[81535]: 2026-01-31 07:57:55.466823569 +0000 UTC m=+1.013554130 container remove 7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af (image=quay.io/ceph/ceph:v20, name=strange_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:57:55 compute-0 sudo[81531]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:55 compute-0 podman[81622]: 2026-01-31 07:57:55.506903329 +0000 UTC m=+0.733176857 container remove 81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4 (image=quay.io/ceph/ceph:v20, name=exciting_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 07:57:55 compute-0 systemd[1]: libpod-conmon-7c401f24e8cb9d5a5235a2e8e4db06cedcb3b3e6cb7d13e33f5a9e127690e7af.scope: Deactivated successfully.
Jan 31 07:57:55 compute-0 systemd[1]: libpod-conmon-81e018c4d4d1493d0d223a9e3547b35188d8dfd86de758226e9f167d624191e4.scope: Deactivated successfully.
Jan 31 07:57:55 compute-0 sudo[81572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:55 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:55 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:55 compute-0 sudo[81689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:55 compute-0 sudo[81689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:55 compute-0 sudo[81689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:55 compute-0 ceph-mon[75294]: Reconfiguring mgr.compute-0.lhuavc (unknown last config time)...
Jan 31 07:57:55 compute-0 ceph-mon[75294]: Reconfiguring daemon mgr.compute-0.lhuavc on compute-0
Jan 31 07:57:55 compute-0 ceph-mon[75294]: mgrmap e10: compute-0.lhuavc(active, since 64s), standbys: compute-0.mefzwz
Jan 31 07:57:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mgr metadata", "who": "compute-0.mefzwz", "id": "compute-0.mefzwz"} : dispatch
Jan 31 07:57:55 compute-0 ceph-mon[75294]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:55 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2911207252' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:57:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:55 compute-0 sudo[81714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 07:57:55 compute-0 sudo[81714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:56 compute-0 podman[81783]: 2026-01-31 07:57:56.200830517 +0000 UTC m=+0.107561906 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:56 compute-0 podman[81783]: 2026-01-31 07:57:56.314079047 +0000 UTC m=+0.220810396 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 31 07:57:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:56 compute-0 sudo[81714]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:56 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 07:57:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:56 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev a1a6047b-3bc4-4915-a1ea-5088e6a79844 (Updating mgr deployment (-1 -> 1))
Jan 31 07:57:56 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.mefzwz from compute-0 -- ports [8765]
Jan 31 07:57:56 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.mefzwz from compute-0 -- ports [8765]
Jan 31 07:57:56 compute-0 sudo[81898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:57 compute-0 sudo[81898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:57 compute-0 sudo[81898]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:57 compute-0 sudo[81923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid dc03f344-536f-5591-add9-31059f42637c --name mgr.compute-0.mefzwz --force --tcp-ports 8765
Jan 31 07:57:57 compute-0 sudo[81923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:57 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.mefzwz for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:57:57 compute-0 podman[81988]: 2026-01-31 07:57:57.528338673 +0000 UTC m=+0.099373393 container died 38dc5ae2c8adf343bb973e448e918cc4fadbdc675013cd9b9f8bf8ca17828017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 07:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fda37ff3ea497970f64c56eb07fa801ad27940c48e4a859035d6f8b924ffe3b7-merged.mount: Deactivated successfully.
Jan 31 07:57:57 compute-0 podman[81988]: 2026-01-31 07:57:57.735607449 +0000 UTC m=+0.306642119 container remove 38dc5ae2c8adf343bb973e448e918cc4fadbdc675013cd9b9f8bf8ca17828017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:57:57 compute-0 bash[81988]: ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-mefzwz
Jan 31 07:57:57 compute-0 systemd[1]: ceph-dc03f344-536f-5591-add9-31059f42637c@mgr.compute-0.mefzwz.service: Main process exited, code=exited, status=143/n/a
Jan 31 07:57:57 compute-0 systemd[1]: ceph-dc03f344-536f-5591-add9-31059f42637c@mgr.compute-0.mefzwz.service: Failed with result 'exit-code'.
Jan 31 07:57:57 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.mefzwz for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:57:57 compute-0 systemd[1]: ceph-dc03f344-536f-5591-add9-31059f42637c@mgr.compute-0.mefzwz.service: Consumed 7.027s CPU time, 466.1M memory peak, read 0B from disk, written 164.0K to disk.
Jan 31 07:57:57 compute-0 systemd[1]: Reloading.
Jan 31 07:57:57 compute-0 systemd-rc-local-generator[82076]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:57:57 compute-0 systemd-sysv-generator[82081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:57:57 compute-0 ceph-mon[75294]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:58 compute-0 sudo[81923]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:58 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.mefzwz
Jan 31 07:57:58 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.mefzwz
Jan 31 07:57:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.mefzwz"} v 0)
Jan 31 07:57:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.mefzwz"} : dispatch
Jan 31 07:57:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mefzwz"}]': finished
Jan 31 07:57:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 07:57:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:58 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev a1a6047b-3bc4-4915-a1ea-5088e6a79844 (Updating mgr deployment (-1 -> 1))
Jan 31 07:57:58 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event a1a6047b-3bc4-4915-a1ea-5088e6a79844 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Jan 31 07:57:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 07:57:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 07:57:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:57:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 07:57:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:57:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:57:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:58 compute-0 sudo[82092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:58 compute-0 sudo[82092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:58 compute-0 sudo[82092]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:58 compute-0 sudo[82117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 07:57:58 compute-0 sudo[82117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:58 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:57:59 compute-0 podman[82155]: 2026-01-31 07:57:58.977014143 +0000 UTC m=+0.019332166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:57:59 compute-0 ceph-mon[75294]: Removing daemon mgr.compute-0.mefzwz from compute-0 -- ports [8765]
Jan 31 07:57:59 compute-0 ceph-mon[75294]: Removing key for mgr.compute-0.mefzwz
Jan 31 07:57:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.mefzwz"} : dispatch
Jan 31 07:57:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mefzwz"}]': finished
Jan 31 07:57:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:57:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:57:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:57:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:57:59 compute-0 ceph-mon[75294]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:57:59 compute-0 podman[82155]: 2026-01-31 07:57:59.093147442 +0000 UTC m=+0.135465455 container create 877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:59 compute-0 systemd[1]: Started libpod-conmon-877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195.scope.
Jan 31 07:57:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:57:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:59 compute-0 podman[82155]: 2026-01-31 07:57:59.392230025 +0000 UTC m=+0.434548068 container init 877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:57:59 compute-0 podman[82155]: 2026-01-31 07:57:59.401006363 +0000 UTC m=+0.443324386 container start 877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:57:59 compute-0 sharp_yonath[82171]: 167 167
Jan 31 07:57:59 compute-0 systemd[1]: libpod-877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195.scope: Deactivated successfully.
Jan 31 07:57:59 compute-0 podman[82155]: 2026-01-31 07:57:59.429203399 +0000 UTC m=+0.471521472 container attach 877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 07:57:59 compute-0 podman[82155]: 2026-01-31 07:57:59.43070855 +0000 UTC m=+0.473026603 container died 877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 07:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-442060df9a3fd350b5a085f0fd45073d13d943b503e069102fdb3a643b1fb402-merged.mount: Deactivated successfully.
Jan 31 07:57:59 compute-0 podman[82155]: 2026-01-31 07:57:59.904607896 +0000 UTC m=+0.946925909 container remove 877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:59 compute-0 systemd[1]: libpod-conmon-877a0be8ccd49677f69962c906568d36b35ae08b1536bc6e910a500311a17195.scope: Deactivated successfully.
Jan 31 07:58:00 compute-0 podman[82198]: 2026-01-31 07:58:00.152229708 +0000 UTC m=+0.119272874 container create 7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:00 compute-0 podman[82198]: 2026-01-31 07:58:00.057101141 +0000 UTC m=+0.024144347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:00 compute-0 systemd[1]: Started libpod-conmon-7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb.scope.
Jan 31 07:58:00 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 3 completed events
Jan 31 07:58:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 07:58:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8a4f9c4fd16cc35858c4095ca137cb9975c684d52eb00c4a423703bda50e09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8a4f9c4fd16cc35858c4095ca137cb9975c684d52eb00c4a423703bda50e09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8a4f9c4fd16cc35858c4095ca137cb9975c684d52eb00c4a423703bda50e09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8a4f9c4fd16cc35858c4095ca137cb9975c684d52eb00c4a423703bda50e09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8a4f9c4fd16cc35858c4095ca137cb9975c684d52eb00c4a423703bda50e09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:00 compute-0 podman[82198]: 2026-01-31 07:58:00.355812074 +0000 UTC m=+0.322855330 container init 7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hawking, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:58:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:00 compute-0 podman[82198]: 2026-01-31 07:58:00.363291157 +0000 UTC m=+0.330334353 container start 7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hawking, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:00 compute-0 podman[82198]: 2026-01-31 07:58:00.50720012 +0000 UTC m=+0.474243396 container attach 7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 07:58:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:00 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:01 compute-0 agitated_hawking[82214]: --> passed data devices: 0 physical, 3 LVM
Jan 31 07:58:01 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:01 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:01 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 138b43d4-6b22-4784-83a9-3b3a12b6e8dd
Jan 31 07:58:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd"} v 0)
Jan 31 07:58:01 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2274084998' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd"} : dispatch
Jan 31 07:58:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 07:58:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:01 compute-0 ceph-mon[75294]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:01 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2274084998' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd"}]': finished
Jan 31 07:58:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 07:58:01 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 07:58:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:01 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:01 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:01 compute-0 agitated_hawking[82214]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 07:58:02 compute-0 lvm[82308]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:02 compute-0 lvm[82308]: VG ceph_vg0 finished
Jan 31 07:58:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 07:58:02 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002354491' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 07:58:02 compute-0 agitated_hawking[82214]:  stderr: got monmap epoch 1
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: --> Creating keyring file for osd.0
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 07:58:02 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 138b43d4-6b22-4784-83a9-3b3a12b6e8dd --setuser ceph --setgroup ceph
Jan 31 07:58:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2274084998' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd"} : dispatch
Jan 31 07:58:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2274084998' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd"}]': finished
Jan 31 07:58:02 compute-0 ceph-mon[75294]: osdmap e4: 1 total, 0 up, 1 in
Jan 31 07:58:02 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3002354491' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 07:58:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:02 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:03 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 07:58:03 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 07:58:03 compute-0 ceph-mon[75294]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:04 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:04 compute-0 ceph-mon[75294]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 07:58:04 compute-0 ceph-mon[75294]: Cluster is now healthy
Jan 31 07:58:06 compute-0 ceph-mon[75294]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:06 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:06 compute-0 agitated_hawking[82214]:  stderr: 2026-01-31T07:58:02.635+0000 7f16361ee8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 31 07:58:06 compute-0 agitated_hawking[82214]:  stderr: 2026-01-31T07:58:02.659+0000 7f16361ee8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 07:58:06 compute-0 agitated_hawking[82214]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 07:58:06 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:58:06 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4d185ab0-8a71-40fb-b34c-388b2e694746
Jan 31 07:58:07 compute-0 ceph-mon[75294]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4d185ab0-8a71-40fb-b34c-388b2e694746"} v 0)
Jan 31 07:58:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196461300' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4d185ab0-8a71-40fb-b34c-388b2e694746"} : dispatch
Jan 31 07:58:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 07:58:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196461300' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4d185ab0-8a71-40fb-b34c-388b2e694746"}]': finished
Jan 31 07:58:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 07:58:07 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 07:58:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:07 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:07 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:07 compute-0 lvm[83242]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 31 07:58:07 compute-0 lvm[83242]: VG ceph_vg1 finished
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:07 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 31 07:58:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 07:58:08 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046391161' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 07:58:08 compute-0 agitated_hawking[82214]:  stderr: got monmap epoch 1
Jan 31 07:58:08 compute-0 agitated_hawking[82214]: --> Creating keyring file for osd.1
Jan 31 07:58:08 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 31 07:58:08 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 31 07:58:08 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 4d185ab0-8a71-40fb-b34c-388b2e694746 --setuser ceph --setgroup ceph
Jan 31 07:58:08 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/196461300' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4d185ab0-8a71-40fb-b34c-388b2e694746"} : dispatch
Jan 31 07:58:08 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/196461300' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4d185ab0-8a71-40fb-b34c-388b2e694746"}]': finished
Jan 31 07:58:08 compute-0 ceph-mon[75294]: osdmap e5: 2 total, 0 up, 2 in
Jan 31 07:58:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:08 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1046391161' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 07:58:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:08 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:09 compute-0 ceph-mon[75294]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:10 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:11 compute-0 agitated_hawking[82214]:  stderr: 2026-01-31T07:58:08.563+0000 7f68aa1ed8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 31 07:58:11 compute-0 agitated_hawking[82214]:  stderr: 2026-01-31T07:58:08.586+0000 7f68aa1ed8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:11 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 39d89c18-9d94-4e5d-ba4b-7f289542d53c
Jan 31 07:58:11 compute-0 ceph-mon[75294]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c"} v 0)
Jan 31 07:58:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1296388502' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c"} : dispatch
Jan 31 07:58:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 07:58:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1296388502' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c"}]': finished
Jan 31 07:58:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 31 07:58:12 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 31 07:58:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:12 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:12 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:12 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 31 07:58:12 compute-0 lvm[84191]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:58:12 compute-0 lvm[84191]: VG ceph_vg2 finished
Jan 31 07:58:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 07:58:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/237350101' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 07:58:12 compute-0 agitated_hawking[82214]:  stderr: got monmap epoch 1
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: --> Creating keyring file for osd.2
Jan 31 07:58:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 31 07:58:12 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:12 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 39d89c18-9d94-4e5d-ba4b-7f289542d53c --setuser ceph --setgroup ceph
Jan 31 07:58:13 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1296388502' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c"} : dispatch
Jan 31 07:58:13 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1296388502' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c"}]': finished
Jan 31 07:58:13 compute-0 ceph-mon[75294]: osdmap e6: 3 total, 0 up, 3 in
Jan 31 07:58:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:13 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/237350101' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 07:58:14 compute-0 ceph-mon[75294]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:14 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:15 compute-0 ceph-mon[75294]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:16 compute-0 agitated_hawking[82214]:  stderr: 2026-01-31T07:58:12.824+0000 7f15e81468c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 31 07:58:16 compute-0 agitated_hawking[82214]:  stderr: 2026-01-31T07:58:12.849+0000 7f15e81468c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 07:58:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:16 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 07:58:16 compute-0 agitated_hawking[82214]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 31 07:58:16 compute-0 systemd[1]: libpod-7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb.scope: Deactivated successfully.
Jan 31 07:58:16 compute-0 podman[82198]: 2026-01-31 07:58:16.853844936 +0000 UTC m=+16.820888142 container died 7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hawking, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:16 compute-0 systemd[1]: libpod-7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb.scope: Consumed 5.573s CPU time.
Jan 31 07:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba8a4f9c4fd16cc35858c4095ca137cb9975c684d52eb00c4a423703bda50e09-merged.mount: Deactivated successfully.
Jan 31 07:58:16 compute-0 podman[82198]: 2026-01-31 07:58:16.900792512 +0000 UTC m=+16.867835688 container remove 7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:16 compute-0 systemd[1]: libpod-conmon-7857e1740649c3866d609c06305fb15c9e9df0e04c02703cab63083c41a94bdb.scope: Deactivated successfully.
Jan 31 07:58:16 compute-0 sudo[82117]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:16 compute-0 sudo[85108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:16 compute-0 sudo[85108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:16 compute-0 sudo[85108]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:17 compute-0 sudo[85133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 07:58:17 compute-0 sudo[85133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:17 compute-0 podman[85170]: 2026-01-31 07:58:17.318286334 +0000 UTC m=+0.051965614 container create c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bhabha, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:58:17 compute-0 systemd[1]: Started libpod-conmon-c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63.scope.
Jan 31 07:58:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:17 compute-0 podman[85170]: 2026-01-31 07:58:17.294045965 +0000 UTC m=+0.027725285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:17 compute-0 podman[85170]: 2026-01-31 07:58:17.395045311 +0000 UTC m=+0.128724581 container init c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:58:17 compute-0 podman[85170]: 2026-01-31 07:58:17.403935093 +0000 UTC m=+0.137614373 container start c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:58:17 compute-0 clever_bhabha[85186]: 167 167
Jan 31 07:58:17 compute-0 systemd[1]: libpod-c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63.scope: Deactivated successfully.
Jan 31 07:58:17 compute-0 podman[85170]: 2026-01-31 07:58:17.409110293 +0000 UTC m=+0.142789623 container attach c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:58:17 compute-0 podman[85170]: 2026-01-31 07:58:17.409609107 +0000 UTC m=+0.143288377 container died c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e21816370afc0a0b259f1eeb0da4002d00f9983d5b4dc9fd1f04726bd9c4c6d0-merged.mount: Deactivated successfully.
Jan 31 07:58:17 compute-0 podman[85170]: 2026-01-31 07:58:17.450666733 +0000 UTC m=+0.184345983 container remove c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bhabha, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:17 compute-0 systemd[1]: libpod-conmon-c8defed8ffd0b4c4ac086f0382db570c33a4885aac8a400d1f1633821f7a1d63.scope: Deactivated successfully.
Jan 31 07:58:17 compute-0 podman[85211]: 2026-01-31 07:58:17.580827123 +0000 UTC m=+0.035748693 container create a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cannon, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 07:58:17 compute-0 systemd[1]: Started libpod-conmon-a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd.scope.
Jan 31 07:58:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0eba5f9627107c35b189ec5c66a096732218a66d30fa8ca41a69b5db499fb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0eba5f9627107c35b189ec5c66a096732218a66d30fa8ca41a69b5db499fb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0eba5f9627107c35b189ec5c66a096732218a66d30fa8ca41a69b5db499fb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0eba5f9627107c35b189ec5c66a096732218a66d30fa8ca41a69b5db499fb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:17 compute-0 podman[85211]: 2026-01-31 07:58:17.649448018 +0000 UTC m=+0.104369618 container init a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:17 compute-0 podman[85211]: 2026-01-31 07:58:17.654572028 +0000 UTC m=+0.109493598 container start a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:17 compute-0 podman[85211]: 2026-01-31 07:58:17.658811253 +0000 UTC m=+0.113732933 container attach a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:17 compute-0 podman[85211]: 2026-01-31 07:58:17.563991845 +0000 UTC m=+0.018913455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:17 compute-0 ceph-mon[75294]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:17 compute-0 sharp_cannon[85227]: {
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:     "0": [
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:         {
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "devices": [
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "/dev/loop3"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             ],
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_name": "ceph_lv0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_size": "21470642176",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "name": "ceph_lv0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "tags": {
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.crush_device_class": "",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.encrypted": "0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osd_id": "0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.type": "block",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.vdo": "0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.with_tpm": "0"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             },
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "type": "block",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "vg_name": "ceph_vg0"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:         }
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:     ],
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:     "1": [
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:         {
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "devices": [
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "/dev/loop4"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             ],
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_name": "ceph_lv1",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_size": "21470642176",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "name": "ceph_lv1",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "tags": {
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.crush_device_class": "",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.encrypted": "0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osd_id": "1",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.type": "block",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.vdo": "0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.with_tpm": "0"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             },
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "type": "block",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "vg_name": "ceph_vg1"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:         }
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:     ],
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:     "2": [
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:         {
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "devices": [
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "/dev/loop5"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             ],
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_name": "ceph_lv2",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_size": "21470642176",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "name": "ceph_lv2",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "tags": {
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.crush_device_class": "",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.encrypted": "0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osd_id": "2",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.type": "block",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.vdo": "0",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:                 "ceph.with_tpm": "0"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             },
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "type": "block",
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:             "vg_name": "ceph_vg2"
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:         }
Jan 31 07:58:17 compute-0 sharp_cannon[85227]:     ]
Jan 31 07:58:17 compute-0 sharp_cannon[85227]: }
Jan 31 07:58:17 compute-0 systemd[1]: libpod-a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd.scope: Deactivated successfully.
Jan 31 07:58:17 compute-0 podman[85236]: 2026-01-31 07:58:17.947851632 +0000 UTC m=+0.023365906 container died a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 07:58:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f0eba5f9627107c35b189ec5c66a096732218a66d30fa8ca41a69b5db499fb6-merged.mount: Deactivated successfully.
Jan 31 07:58:17 compute-0 podman[85236]: 2026-01-31 07:58:17.98676396 +0000 UTC m=+0.062278184 container remove a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_cannon, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 07:58:17 compute-0 systemd[1]: libpod-conmon-a2277a726e1ca5b46d0e114dba48e649988ebc4cc90f35066a0adf8ff1b42ffd.scope: Deactivated successfully.
Jan 31 07:58:18 compute-0 sudo[85133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 31 07:58:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 07:58:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:58:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:18 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 07:58:18 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 07:58:18 compute-0 sudo[85251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:18 compute-0 sudo[85251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:18 compute-0 sudo[85251]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:18 compute-0 sudo[85276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:58:18 compute-0 sudo[85276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:18 compute-0 podman[85342]: 2026-01-31 07:58:18.445265016 +0000 UTC m=+0.029158946 container create 7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:58:18 compute-0 systemd[1]: Started libpod-conmon-7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e.scope.
Jan 31 07:58:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:18 compute-0 podman[85342]: 2026-01-31 07:58:18.494316401 +0000 UTC m=+0.078210371 container init 7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_benz, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:18 compute-0 podman[85342]: 2026-01-31 07:58:18.499358555 +0000 UTC m=+0.083252495 container start 7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:18 compute-0 charming_benz[85357]: 167 167
Jan 31 07:58:18 compute-0 podman[85342]: 2026-01-31 07:58:18.503175622 +0000 UTC m=+0.087069572 container attach 7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 07:58:18 compute-0 systemd[1]: libpod-7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e.scope: Deactivated successfully.
Jan 31 07:58:18 compute-0 podman[85342]: 2026-01-31 07:58:18.50378461 +0000 UTC m=+0.087678580 container died 7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:58:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c96d412082f117e1b369839c3cdaa6623efab367893d2ad92c1dfde903a5dec2-merged.mount: Deactivated successfully.
Jan 31 07:58:18 compute-0 podman[85342]: 2026-01-31 07:58:18.432071091 +0000 UTC m=+0.015965051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:18 compute-0 podman[85342]: 2026-01-31 07:58:18.540530368 +0000 UTC m=+0.124424328 container remove 7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_benz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:18 compute-0 systemd[1]: libpod-conmon-7453e409c2acb518b1887a79949ebe6981fffd0d19eef58289d6f588f03dbd1e.scope: Deactivated successfully.
Jan 31 07:58:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:18 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:18 compute-0 podman[85387]: 2026-01-31 07:58:18.811578582 +0000 UTC m=+0.058361681 container create ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:58:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 07:58:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:18 compute-0 systemd[1]: Started libpod-conmon-ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5.scope.
Jan 31 07:58:18 compute-0 podman[85387]: 2026-01-31 07:58:18.787147293 +0000 UTC m=+0.033930442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3282b84545353f06b9ad7b39b9a4a85496f43e3503c0e9a2eb1bc8686827ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3282b84545353f06b9ad7b39b9a4a85496f43e3503c0e9a2eb1bc8686827ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3282b84545353f06b9ad7b39b9a4a85496f43e3503c0e9a2eb1bc8686827ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3282b84545353f06b9ad7b39b9a4a85496f43e3503c0e9a2eb1bc8686827ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3282b84545353f06b9ad7b39b9a4a85496f43e3503c0e9a2eb1bc8686827ee/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:18 compute-0 podman[85387]: 2026-01-31 07:58:18.909699372 +0000 UTC m=+0.156482461 container init ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:18 compute-0 podman[85387]: 2026-01-31 07:58:18.917295086 +0000 UTC m=+0.164078145 container start ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:58:18 compute-0 podman[85387]: 2026-01-31 07:58:18.922814495 +0000 UTC m=+0.169597654 container attach ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 07:58:19 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test[85403]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 07:58:19 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test[85403]:                             [--no-systemd] [--no-tmpfs]
Jan 31 07:58:19 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test[85403]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 07:58:19 compute-0 systemd[1]: libpod-ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5.scope: Deactivated successfully.
Jan 31 07:58:19 compute-0 podman[85387]: 2026-01-31 07:58:19.102445705 +0000 UTC m=+0.349228774 container died ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f3282b84545353f06b9ad7b39b9a4a85496f43e3503c0e9a2eb1bc8686827ee-merged.mount: Deactivated successfully.
Jan 31 07:58:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:19 compute-0 podman[85387]: 2026-01-31 07:58:19.213946496 +0000 UTC m=+0.460729555 container remove ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate-test, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:58:19 compute-0 systemd[1]: libpod-conmon-ee9f12c780a8c4610ab5a6d54a3100a0a29dc4a4e404106ebf81e99abadbd9f5.scope: Deactivated successfully.
Jan 31 07:58:19 compute-0 systemd[1]: Reloading.
Jan 31 07:58:19 compute-0 systemd-rc-local-generator[85470]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:58:19 compute-0 systemd-sysv-generator[85473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:58:19 compute-0 systemd[1]: Reloading.
Jan 31 07:58:19 compute-0 systemd-sysv-generator[85511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:58:19 compute-0 systemd-rc-local-generator[85508]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:58:19 compute-0 ceph-mon[75294]: Deploying daemon osd.0 on compute-0
Jan 31 07:58:19 compute-0 ceph-mon[75294]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:19 compute-0 systemd[1]: Starting Ceph osd.0 for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:58:20 compute-0 podman[85570]: 2026-01-31 07:58:20.122612609 +0000 UTC m=+0.091177408 container create 7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:20 compute-0 podman[85570]: 2026-01-31 07:58:20.050143667 +0000 UTC m=+0.018708486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f440c7a3c35935852fdd47a7f820020675b34fe1ab17f1e38c07b4afb562a99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f440c7a3c35935852fdd47a7f820020675b34fe1ab17f1e38c07b4afb562a99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f440c7a3c35935852fdd47a7f820020675b34fe1ab17f1e38c07b4afb562a99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f440c7a3c35935852fdd47a7f820020675b34fe1ab17f1e38c07b4afb562a99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f440c7a3c35935852fdd47a7f820020675b34fe1ab17f1e38c07b4afb562a99/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:20 compute-0 podman[85570]: 2026-01-31 07:58:20.258483248 +0000 UTC m=+0.227048077 container init 7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 07:58:20 compute-0 podman[85570]: 2026-01-31 07:58:20.266245196 +0000 UTC m=+0.234809995 container start 7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:20 compute-0 podman[85570]: 2026-01-31 07:58:20.295080371 +0000 UTC m=+0.263645170 container attach 7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:58:20 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:20 compute-0 bash[85570]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:20 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:20 compute-0 bash[85570]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:20 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:20 compute-0 lvm[85668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:20 compute-0 lvm[85668]: VG ceph_vg0 finished
Jan 31 07:58:20 compute-0 lvm[85671]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:58:20 compute-0 lvm[85671]: VG ceph_vg1 finished
Jan 31 07:58:20 compute-0 lvm[85673]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:58:20 compute-0 lvm[85673]: VG ceph_vg2 finished
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:21 compute-0 bash[85570]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:58:21 compute-0 bash[85570]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:58:21 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate[85585]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 07:58:21 compute-0 bash[85570]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 07:58:21 compute-0 systemd[1]: libpod-7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874.scope: Deactivated successfully.
Jan 31 07:58:21 compute-0 systemd[1]: libpod-7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874.scope: Consumed 1.188s CPU time.
Jan 31 07:58:21 compute-0 podman[85570]: 2026-01-31 07:58:21.22609879 +0000 UTC m=+1.194663609 container died 7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f440c7a3c35935852fdd47a7f820020675b34fe1ab17f1e38c07b4afb562a99-merged.mount: Deactivated successfully.
Jan 31 07:58:21 compute-0 podman[85570]: 2026-01-31 07:58:21.28900211 +0000 UTC m=+1.257566909 container remove 7a6a7313459cd1de29ff0d8db6eb04c63b889d58890cdd35df5317a134de3874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:21 compute-0 podman[85845]: 2026-01-31 07:58:21.426030242 +0000 UTC m=+0.032604070 container create 4e80450fb78be4db424d35e78e342d9c2cf1a12410a5ef06cf5341531f2dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c47f5b95405b8c4d7c730c7d70b0a7edddb9d57914e4e00d86eea59a03f61f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c47f5b95405b8c4d7c730c7d70b0a7edddb9d57914e4e00d86eea59a03f61f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c47f5b95405b8c4d7c730c7d70b0a7edddb9d57914e4e00d86eea59a03f61f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c47f5b95405b8c4d7c730c7d70b0a7edddb9d57914e4e00d86eea59a03f61f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c47f5b95405b8c4d7c730c7d70b0a7edddb9d57914e4e00d86eea59a03f61f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:21 compute-0 podman[85845]: 2026-01-31 07:58:21.481095252 +0000 UTC m=+0.087669100 container init 4e80450fb78be4db424d35e78e342d9c2cf1a12410a5ef06cf5341531f2dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:21 compute-0 podman[85845]: 2026-01-31 07:58:21.487382045 +0000 UTC m=+0.093955873 container start 4e80450fb78be4db424d35e78e342d9c2cf1a12410a5ef06cf5341531f2dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:21 compute-0 bash[85845]: 4e80450fb78be4db424d35e78e342d9c2cf1a12410a5ef06cf5341531f2dc1e4
Jan 31 07:58:21 compute-0 podman[85845]: 2026-01-31 07:58:21.41256162 +0000 UTC m=+0.019135468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:21 compute-0 systemd[1]: Started Ceph osd.0 for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:58:21 compute-0 ceph-osd[85864]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: pidfile_write: ignore empty --pid-file
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 sudo[85276]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:21 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 31 07:58:21 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 07:58:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:58:21 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:21 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 31 07:58:21 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 sudo[85880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:21 compute-0 sudo[85880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:21 compute-0 sudo[85880]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 sudo[85911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:58:21 compute-0 sudo[85911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 07:58:21 compute-0 ceph-osd[85864]: load: jerasure load: lrc 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 07:58:21 compute-0 ceph-osd[85864]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-mon[75294]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 07:58:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6a1f3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount shared_bdev_used = 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Git sha 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: DB SUMMARY
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: DB Session ID:  OGF69LD8IX8PYSOPGSGL
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                     Options.env: 0x563a6a083ea0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                Options.info_log: 0x563a6b10a8a0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.write_buffer_manager: 0x563a6a0e4b40
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.row_cache: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                              Options.wal_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.wal_compression: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Compression algorithms supported:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kZSTD supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b10ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2ac022b6-fc19-4201-a401-7720b43c4ac0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846301890272, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846301891164, "job": 1, "event": "recovery_finished"}
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: freelist init
Jan 31 07:58:21 compute-0 ceph-osd[85864]: freelist _read_cfg
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs umount
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bdev(0x563a6ae89800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluefs mount shared_bdev_used = 27262976
Jan 31 07:58:21 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Git sha 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: DB SUMMARY
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: DB Session ID:  OGF69LD8IX8PYSOPGSGK
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                     Options.env: 0x563a6aecff80
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                Options.info_log: 0x563a6b10bc60
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.write_buffer_manager: 0x563a6a0e5900
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.row_cache: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                              Options.wal_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.wal_compression: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Compression algorithms supported:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kZSTD supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a087a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e360)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0874b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e360)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0874b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563a6b13e360)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563a6a0874b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2ac022b6-fc19-4201-a401-7720b43c4ac0
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846301936746, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846301946898, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846301, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2ac022b6-fc19-4201-a401-7720b43c4ac0", "db_session_id": "OGF69LD8IX8PYSOPGSGK", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846301951534, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846301, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2ac022b6-fc19-4201-a401-7720b43c4ac0", "db_session_id": "OGF69LD8IX8PYSOPGSGK", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846301959004, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846301, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2ac022b6-fc19-4201-a401-7720b43c4ac0", "db_session_id": "OGF69LD8IX8PYSOPGSGK", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846301961437, "job": 1, "event": "recovery_finished"}
Jan 31 07:58:21 compute-0 ceph-osd[85864]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 07:58:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563a6b2efc00
Jan 31 07:58:22 compute-0 ceph-osd[85864]: rocksdb: DB pointer 0x563a6b2c4000
Jan 31 07:58:22 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:58:22 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 07:58:22 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 07:58:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:58:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:58:22 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 07:58:22 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 07:58:22 compute-0 ceph-osd[85864]: _get_class not permitted to load lua
Jan 31 07:58:22 compute-0 ceph-osd[85864]: _get_class not permitted to load sdk
Jan 31 07:58:22 compute-0 ceph-osd[85864]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 07:58:22 compute-0 ceph-osd[85864]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 07:58:22 compute-0 ceph-osd[85864]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 07:58:22 compute-0 ceph-osd[85864]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 07:58:22 compute-0 ceph-osd[85864]: osd.0 0 load_pgs
Jan 31 07:58:22 compute-0 ceph-osd[85864]: osd.0 0 load_pgs opened 0 pgs
Jan 31 07:58:22 compute-0 ceph-osd[85864]: osd.0 0 log_to_monitors true
Jan 31 07:58:22 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0[85860]: 2026-01-31T07:58:22.010+0000 7f1805be68c0 -1 osd.0 0 log_to_monitors true
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 31 07:58:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 07:58:22 compute-0 podman[86375]: 2026-01-31 07:58:22.034453687 +0000 UTC m=+0.037107150 container create cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 07:58:22 compute-0 systemd[1]: Started libpod-conmon-cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144.scope.
Jan 31 07:58:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:22 compute-0 podman[86375]: 2026-01-31 07:58:22.111249472 +0000 UTC m=+0.113902975 container init cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:22 compute-0 podman[86375]: 2026-01-31 07:58:22.01698865 +0000 UTC m=+0.019642123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:22 compute-0 podman[86375]: 2026-01-31 07:58:22.116309527 +0000 UTC m=+0.118962990 container start cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_volhard, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 07:58:22 compute-0 podman[86375]: 2026-01-31 07:58:22.121233709 +0000 UTC m=+0.123887222 container attach cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_volhard, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:58:22 compute-0 mystifying_volhard[86424]: 167 167
Jan 31 07:58:22 compute-0 systemd[1]: libpod-cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144.scope: Deactivated successfully.
Jan 31 07:58:22 compute-0 podman[86375]: 2026-01-31 07:58:22.124102677 +0000 UTC m=+0.126756180 container died cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 07:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98564a0d2cc23a1e11c4de8d1fe92bab4f0f1b31836244a64a5710bb3c3e7b5-merged.mount: Deactivated successfully.
Jan 31 07:58:22 compute-0 podman[86375]: 2026-01-31 07:58:22.169700866 +0000 UTC m=+0.172354339 container remove cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:22 compute-0 systemd[1]: libpod-conmon-cdaa1619e81f68a85547c7c932d54d4fbcf42ba5a7a8eb914434efbd6847a144.scope: Deactivated successfully.
Jan 31 07:58:22 compute-0 podman[86456]: 2026-01-31 07:58:22.354810774 +0000 UTC m=+0.040792942 container create a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:58:22 compute-0 systemd[1]: Started libpod-conmon-a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af.scope.
Jan 31 07:58:22 compute-0 podman[86456]: 2026-01-31 07:58:22.335815581 +0000 UTC m=+0.021797739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696f23dec95a5962b3b959993a28fcce2a268470f386b4e06fd89be2422ef460/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696f23dec95a5962b3b959993a28fcce2a268470f386b4e06fd89be2422ef460/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696f23dec95a5962b3b959993a28fcce2a268470f386b4e06fd89be2422ef460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696f23dec95a5962b3b959993a28fcce2a268470f386b4e06fd89be2422ef460/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696f23dec95a5962b3b959993a28fcce2a268470f386b4e06fd89be2422ef460/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:22 compute-0 podman[86456]: 2026-01-31 07:58:22.47953239 +0000 UTC m=+0.165514548 container init a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 07:58:22 compute-0 podman[86456]: 2026-01-31 07:58:22.488920058 +0000 UTC m=+0.174902196 container start a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 07:58:22 compute-0 podman[86456]: 2026-01-31 07:58:22.495773467 +0000 UTC m=+0.181755615 container attach a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 07:58:22 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test[86472]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 07:58:22 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test[86472]:                             [--no-systemd] [--no-tmpfs]
Jan 31 07:58:22 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test[86472]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 07:58:22 compute-0 systemd[1]: libpod-a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af.scope: Deactivated successfully.
Jan 31 07:58:22 compute-0 podman[86456]: 2026-01-31 07:58:22.662815252 +0000 UTC m=+0.348797430 container died a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-696f23dec95a5962b3b959993a28fcce2a268470f386b4e06fd89be2422ef460-merged.mount: Deactivated successfully.
Jan 31 07:58:22 compute-0 podman[86456]: 2026-01-31 07:58:22.713679712 +0000 UTC m=+0.399661890 container remove a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 07:58:22 compute-0 systemd[1]: libpod-conmon-a457dbc1565847429080d5b1996e8fe05d8a14897cf51b914c99fbc133c038af.scope: Deactivated successfully.
Jan 31 07:58:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:22 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:22 compute-0 ceph-mon[75294]: Deploying daemon osd.1 on compute-0
Jan 31 07:58:22 compute-0 ceph-mon[75294]: from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 07:58:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 31 07:58:22 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 07:58:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:22 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:22 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:22 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:22 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:22 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:22 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:22 compute-0 systemd[1]: Reloading.
Jan 31 07:58:23 compute-0 systemd-sysv-generator[86538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:58:23 compute-0 systemd-rc-local-generator[86532]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:58:23 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 07:58:23 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 07:58:23 compute-0 systemd[1]: Reloading.
Jan 31 07:58:23 compute-0 systemd-rc-local-generator[86577]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:58:23 compute-0 systemd-sysv-generator[86580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:58:23 compute-0 systemd[1]: Starting Ceph osd.1 for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:58:23 compute-0 podman[86635]: 2026-01-31 07:58:23.601327791 +0000 UTC m=+0.038489781 container create ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 07:58:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683a2aef716e05227fb2e81947b524416dcbe8b285e347a46c5d57264d9c765a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683a2aef716e05227fb2e81947b524416dcbe8b285e347a46c5d57264d9c765a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683a2aef716e05227fb2e81947b524416dcbe8b285e347a46c5d57264d9c765a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683a2aef716e05227fb2e81947b524416dcbe8b285e347a46c5d57264d9c765a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683a2aef716e05227fb2e81947b524416dcbe8b285e347a46c5d57264d9c765a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:23 compute-0 podman[86635]: 2026-01-31 07:58:23.67527326 +0000 UTC m=+0.112435250 container init ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 07:58:23 compute-0 podman[86635]: 2026-01-31 07:58:23.583973619 +0000 UTC m=+0.021135629 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:23 compute-0 podman[86635]: 2026-01-31 07:58:23.680323814 +0000 UTC m=+0.117485804 container start ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:23 compute-0 podman[86635]: 2026-01-31 07:58:23.684455061 +0000 UTC m=+0.121617071 container attach ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 07:58:23 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:23 compute-0 bash[86635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 07:58:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:23 compute-0 ceph-mon[75294]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:23 compute-0 ceph-mon[75294]: from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 07:58:23 compute-0 ceph-mon[75294]: osdmap e7: 3 total, 0 up, 3 in
Jan 31 07:58:23 compute-0 ceph-mon[75294]: from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 07:58:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:23 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:23 compute-0 bash[86635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:23 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:58:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 31 07:58:23 compute-0 ceph-osd[85864]: osd.0 0 done with init, starting boot process
Jan 31 07:58:23 compute-0 ceph-osd[85864]: osd.0 0 start_boot
Jan 31 07:58:23 compute-0 ceph-osd[85864]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 07:58:23 compute-0 ceph-osd[85864]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 07:58:23 compute-0 ceph-osd[85864]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 07:58:23 compute-0 ceph-osd[85864]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 07:58:23 compute-0 ceph-osd[85864]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 31 07:58:23 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 31 07:58:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:23 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:23 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:23 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:23 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:23 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:23 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:23 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4143298534; not ready for session (expect reconnect)
Jan 31 07:58:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:23 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:23 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:24 compute-0 lvm[86734]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:24 compute-0 lvm[86734]: VG ceph_vg0 finished
Jan 31 07:58:24 compute-0 lvm[86737]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:58:24 compute-0 lvm[86737]: VG ceph_vg1 finished
Jan 31 07:58:24 compute-0 lvm[86739]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:58:24 compute-0 lvm[86739]: VG ceph_vg2 finished
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:24 compute-0 bash[86635]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 07:58:24 compute-0 bash[86635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 07:58:24 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate[86650]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 07:58:24 compute-0 bash[86635]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 07:58:24 compute-0 systemd[1]: libpod-ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318.scope: Deactivated successfully.
Jan 31 07:58:24 compute-0 podman[86635]: 2026-01-31 07:58:24.718307296 +0000 UTC m=+1.155469296 container died ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:24 compute-0 systemd[1]: libpod-ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318.scope: Consumed 1.242s CPU time.
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-683a2aef716e05227fb2e81947b524416dcbe8b285e347a46c5d57264d9c765a-merged.mount: Deactivated successfully.
Jan 31 07:58:24 compute-0 podman[86635]: 2026-01-31 07:58:24.87199628 +0000 UTC m=+1.309158250 container remove ae85f530771c0c229d22491e85fe073e02918c50a29cc51250b67569d5943318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4143298534; not ready for session (expect reconnect)
Jan 31 07:58:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:24 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:24 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:24 compute-0 ceph-mon[75294]: from='osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:58:24 compute-0 ceph-mon[75294]: osdmap e8: 3 total, 0 up, 3 in
Jan 31 07:58:24 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:24 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:24 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:24 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:25 compute-0 podman[86910]: 2026-01-31 07:58:25.099939262 +0000 UTC m=+0.048621972 container create b3da993d541daaa41068ccfd96846d680ded2e8febf609ac5ba50e59cab466a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afc4e3f963d5b7d99053e2f1b907e6b786c4e44791c312b6f42b4d6f74b7de6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afc4e3f963d5b7d99053e2f1b907e6b786c4e44791c312b6f42b4d6f74b7de6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afc4e3f963d5b7d99053e2f1b907e6b786c4e44791c312b6f42b4d6f74b7de6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afc4e3f963d5b7d99053e2f1b907e6b786c4e44791c312b6f42b4d6f74b7de6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2afc4e3f963d5b7d99053e2f1b907e6b786c4e44791c312b6f42b4d6f74b7de6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 podman[86910]: 2026-01-31 07:58:25.07315434 +0000 UTC m=+0.021837130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:25 compute-0 podman[86910]: 2026-01-31 07:58:25.189943454 +0000 UTC m=+0.138626224 container init b3da993d541daaa41068ccfd96846d680ded2e8febf609ac5ba50e59cab466a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:25 compute-0 podman[86910]: 2026-01-31 07:58:25.198532747 +0000 UTC m=+0.147215467 container start b3da993d541daaa41068ccfd96846d680ded2e8febf609ac5ba50e59cab466a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:25 compute-0 bash[86910]: b3da993d541daaa41068ccfd96846d680ded2e8febf609ac5ba50e59cab466a6
Jan 31 07:58:25 compute-0 systemd[1]: Started Ceph osd.1 for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:58:25 compute-0 ceph-osd[86929]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: pidfile_write: ignore empty --pid-file
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 sudo[85911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 31 07:58:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 07:58:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:58:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:25 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 31 07:58:25 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec400 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2ec000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 sudo[86949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:25 compute-0 ceph-osd[86929]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 31 07:58:25 compute-0 sudo[86949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:25 compute-0 sudo[86949]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:25 compute-0 ceph-osd[86929]: load: jerasure load: lrc 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 sudo[86982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:58:25 compute-0 sudo[86982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 sudo[87049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaqqyykycqojdctfeglqcpgujxlgcfry ; /usr/bin/python3'
Jan 31 07:58:25 compute-0 sudo[87049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5a2edc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount shared_bdev_used = 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Git sha 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: DB SUMMARY
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: DB Session ID:  QTIZ667UN2OEPNVAK85X
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                     Options.env: 0x556c5a17dea0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                Options.info_log: 0x556c5b1ce8a0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.write_buffer_manager: 0x556c5a1e2b40
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.row_cache: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                              Options.wal_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.wal_compression: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Compression algorithms supported:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kZSTD supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a181a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a181a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a181a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 98883d1d-0125-43a6-8497-d7edeea41eba
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846305609574, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846305610774, "job": 1, "event": "recovery_finished"}
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: freelist init
Jan 31 07:58:25 compute-0 ceph-osd[86929]: freelist _read_cfg
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs umount
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bdev(0x556c5af83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluefs mount shared_bdev_used = 27262976
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Git sha 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: DB SUMMARY
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: DB Session ID:  QTIZ667UN2OEPNVAK85W
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                     Options.env: 0x556c5a17dce0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                Options.info_log: 0x556c5b1cea20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.write_buffer_manager: 0x556c5a1e2b40
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.row_cache: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                              Options.wal_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.wal_compression: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Compression algorithms supported:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kZSTD supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cebc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cebc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cebc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cebc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cebc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cebc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cebc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a1818d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cf0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a181a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cf0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a181a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c5b1cf0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556c5a181a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 98883d1d-0125-43a6-8497-d7edeea41eba
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846305659364, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846305689708, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "98883d1d-0125-43a6-8497-d7edeea41eba", "db_session_id": "QTIZ667UN2OEPNVAK85W", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846305711141, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "98883d1d-0125-43a6-8497-d7edeea41eba", "db_session_id": "QTIZ667UN2OEPNVAK85W", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846305714856, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "98883d1d-0125-43a6-8497-d7edeea41eba", "db_session_id": "QTIZ667UN2OEPNVAK85W", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846305716193, "job": 1, "event": "recovery_finished"}
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 07:58:25 compute-0 python3[87051]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:58:25 compute-0 podman[87439]: 2026-01-31 07:58:25.823201829 +0000 UTC m=+0.057386762 container create fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1 (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556c5b3e8000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: DB pointer 0x556c5b388000
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 31 07:58:25 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:58:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:58:25 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 07:58:25 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 07:58:25 compute-0 ceph-osd[86929]: _get_class not permitted to load lua
Jan 31 07:58:25 compute-0 ceph-osd[86929]: _get_class not permitted to load sdk
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1 0 load_pgs
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1 0 load_pgs opened 0 pgs
Jan 31 07:58:25 compute-0 ceph-osd[86929]: osd.1 0 log_to_monitors true
Jan 31 07:58:25 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1[86925]: 2026-01-31T07:58:25.830+0000 7f41392658c0 -1 osd.1 0 log_to_monitors true
Jan 31 07:58:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 31 07:58:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 07:58:25 compute-0 systemd[1]: Started libpod-conmon-fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1.scope.
Jan 31 07:58:25 compute-0 podman[87439]: 2026-01-31 07:58:25.783587504 +0000 UTC m=+0.017772467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:58:25 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4143298534; not ready for session (expect reconnect)
Jan 31 07:58:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:25 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa88abd115a6c97819731d41f49b6e2ff9d2a5813c5b7b4f4b7d4f21243b9c80/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa88abd115a6c97819731d41f49b6e2ff9d2a5813c5b7b4f4b7d4f21243b9c80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa88abd115a6c97819731d41f49b6e2ff9d2a5813c5b7b4f4b7d4f21243b9c80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:25 compute-0 ceph-mon[75294]: purged_snaps scrub starts
Jan 31 07:58:25 compute-0 ceph-mon[75294]: purged_snaps scrub ok
Jan 31 07:58:25 compute-0 ceph-mon[75294]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 07:58:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:25 compute-0 ceph-mon[75294]: from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 07:58:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:25 compute-0 podman[87512]: 2026-01-31 07:58:25.962359087 +0000 UTC m=+0.092742946 container create 98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:25 compute-0 podman[87512]: 2026-01-31 07:58:25.894265349 +0000 UTC m=+0.024649218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:26 compute-0 systemd[1]: Started libpod-conmon-98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82.scope.
Jan 31 07:58:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:26 compute-0 podman[87439]: 2026-01-31 07:58:26.02144782 +0000 UTC m=+0.255632753 container init fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1 (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:58:26 compute-0 podman[87439]: 2026-01-31 07:58:26.026230197 +0000 UTC m=+0.260415110 container start fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1 (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 07:58:26 compute-0 podman[87512]: 2026-01-31 07:58:26.080995117 +0000 UTC m=+0.211379016 container init 98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_feistel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:58:26 compute-0 podman[87512]: 2026-01-31 07:58:26.086818335 +0000 UTC m=+0.217202204 container start 98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:58:26 compute-0 trusting_feistel[87529]: 167 167
Jan 31 07:58:26 compute-0 systemd[1]: libpod-98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82.scope: Deactivated successfully.
Jan 31 07:58:26 compute-0 podman[87512]: 2026-01-31 07:58:26.109028797 +0000 UTC m=+0.239412666 container attach 98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_feistel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:26 compute-0 podman[87512]: 2026-01-31 07:58:26.109924624 +0000 UTC m=+0.240308493 container died 98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 07:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-56258ff80b6a3909ecf8c8ed6b77e474bca76540d4e84fa166f2e81a33d963f0-merged.mount: Deactivated successfully.
Jan 31 07:58:26 compute-0 podman[87512]: 2026-01-31 07:58:26.246815613 +0000 UTC m=+0.377199492 container remove 98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:26 compute-0 systemd[1]: libpod-conmon-98002bd95ef0608f4c9f4b08ad12cc44c6462cefa146a295e4d67526d6d16e82.scope: Deactivated successfully.
Jan 31 07:58:26 compute-0 podman[87439]: 2026-01-31 07:58:26.280023632 +0000 UTC m=+0.514208585 container attach fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1 (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:26 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:26 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:26 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:26 compute-0 podman[87580]: 2026-01-31 07:58:26.498246616 +0000 UTC m=+0.068721659 container create ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1342538586' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:58:26 compute-0 tender_ptolemy[87520]: 
Jan 31 07:58:26 compute-0 tender_ptolemy[87520]: {"fsid":"dc03f344-536f-5591-add9-31059f42637c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":118,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1769846291,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-31T07:56:24:478364+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T07:57:52.741188+0000","services":{}},"progress_events":{}}
Jan 31 07:58:26 compute-0 systemd[1]: libpod-fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1.scope: Deactivated successfully.
Jan 31 07:58:26 compute-0 podman[87439]: 2026-01-31 07:58:26.538535682 +0000 UTC m=+0.772720635 container died fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1 (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:26 compute-0 podman[87580]: 2026-01-31 07:58:26.464045287 +0000 UTC m=+0.034520380 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:26 compute-0 systemd[1]: Started libpod-conmon-ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4.scope.
Jan 31 07:58:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8d1a8ca702212806ed3e5b1fff02d1de556b1750682141a3e9e579f5274114/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8d1a8ca702212806ed3e5b1fff02d1de556b1750682141a3e9e579f5274114/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8d1a8ca702212806ed3e5b1fff02d1de556b1750682141a3e9e579f5274114/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8d1a8ca702212806ed3e5b1fff02d1de556b1750682141a3e9e579f5274114/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8d1a8ca702212806ed3e5b1fff02d1de556b1750682141a3e9e579f5274114/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa88abd115a6c97819731d41f49b6e2ff9d2a5813c5b7b4f4b7d4f21243b9c80-merged.mount: Deactivated successfully.
Jan 31 07:58:26 compute-0 podman[87580]: 2026-01-31 07:58:26.656022436 +0000 UTC m=+0.226497489 container init ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:26 compute-0 podman[87580]: 2026-01-31 07:58:26.66330742 +0000 UTC m=+0.233782443 container start ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 07:58:26 compute-0 podman[87580]: 2026-01-31 07:58:26.68124078 +0000 UTC m=+0.251715803 container attach ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 07:58:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:26 compute-0 ceph-mgr[75591]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 07:58:26 compute-0 podman[87439]: 2026-01-31 07:58:26.792982358 +0000 UTC m=+1.027167281 container remove fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1 (image=quay.io/ceph/ceph:v20, name=tender_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 31 07:58:26 compute-0 systemd[1]: libpod-conmon-fd01c5a77d503e02d9448a77fd90db27f521c11a2d1a6271824377f6c146c9b1.scope: Deactivated successfully.
Jan 31 07:58:26 compute-0 sudo[87049]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:26 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test[87609]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 07:58:26 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test[87609]:                             [--no-systemd] [--no-tmpfs]
Jan 31 07:58:26 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test[87609]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 07:58:26 compute-0 systemd[1]: libpod-ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4.scope: Deactivated successfully.
Jan 31 07:58:26 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 07:58:26 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 07:58:26 compute-0 podman[87616]: 2026-01-31 07:58:26.889089015 +0000 UTC m=+0.020691976 container died ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 31 07:58:26 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4143298534; not ready for session (expect reconnect)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:26 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:26 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc8d1a8ca702212806ed3e5b1fff02d1de556b1750682141a3e9e579f5274114-merged.mount: Deactivated successfully.
Jan 31 07:58:26 compute-0 ceph-mon[75294]: Deploying daemon osd.2 on compute-0
Jan 31 07:58:26 compute-0 ceph-mon[75294]: from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 07:58:26 compute-0 ceph-mon[75294]: osdmap e9: 3 total, 0 up, 3 in
Jan 31 07:58:26 compute-0 ceph-mon[75294]: from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1342538586' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:58:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:26 compute-0 podman[87616]: 2026-01-31 07:58:26.97952262 +0000 UTC m=+0.111125571 container remove ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 07:58:26 compute-0 systemd[1]: libpod-conmon-ef377e407a062b271c9620758e50d72908caee066609c4dd903460e8534f9fb4.scope: Deactivated successfully.
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 37.366 iops: 9565.764 elapsed_sec: 0.314
Jan 31 07:58:27 compute-0 ceph-osd[85864]: log_channel(cluster) log [WRN] : OSD bench result of 9565.764328 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 0 waiting for initial osdmap
Jan 31 07:58:27 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0[85860]: 2026-01-31T07:58:27.160+0000 7f180237a640 -1 osd.0 0 waiting for initial osdmap
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:58:27 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-0[85860]: 2026-01-31T07:58:27.196+0000 7f17fc96d640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:58:27 compute-0 systemd[1]: Reloading.
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 31 07:58:27 compute-0 systemd-rc-local-generator[87671]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:58:27 compute-0 systemd-sysv-generator[87676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:58:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 07:58:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:58:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 31 07:58:27 compute-0 ceph-osd[86929]: osd.1 0 done with init, starting boot process
Jan 31 07:58:27 compute-0 ceph-osd[86929]: osd.1 0 start_boot
Jan 31 07:58:27 compute-0 ceph-osd[86929]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 07:58:27 compute-0 ceph-osd[86929]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 07:58:27 compute-0 ceph-osd[86929]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 07:58:27 compute-0 ceph-osd[86929]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 07:58:27 compute-0 ceph-osd[86929]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 31 07:58:27 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534] boot
Jan 31 07:58:27 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 31 07:58:27 compute-0 ceph-osd[85864]: osd.0 10 state: booting -> active
Jan 31 07:58:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 07:58:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:27 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:27 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:27 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2379191179; not ready for session (expect reconnect)
Jan 31 07:58:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:27 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:27 compute-0 systemd[1]: Reloading.
Jan 31 07:58:27 compute-0 systemd-rc-local-generator[87718]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:58:27 compute-0 systemd-sysv-generator[87721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:58:27 compute-0 systemd[1]: Starting Ceph osd.2 for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:58:27 compute-0 podman[87774]: 2026-01-31 07:58:27.961708858 +0000 UTC m=+0.072738202 container create f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:27 compute-0 ceph-mon[75294]: pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:58:27 compute-0 ceph-mon[75294]: from='osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:58:27 compute-0 ceph-mon[75294]: osd.0 [v2:192.168.122.100:6802/4143298534,v1:192.168.122.100:6803/4143298534] boot
Jan 31 07:58:27 compute-0 ceph-mon[75294]: osdmap e10: 3 total, 1 up, 3 in
Jan 31 07:58:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 07:58:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:28 compute-0 podman[87774]: 2026-01-31 07:58:27.915645335 +0000 UTC m=+0.026674749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e75f68311557e6ae988e4712437c1cd3be5380e72f0d3ca5f7d3beabda5e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e75f68311557e6ae988e4712437c1cd3be5380e72f0d3ca5f7d3beabda5e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e75f68311557e6ae988e4712437c1cd3be5380e72f0d3ca5f7d3beabda5e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e75f68311557e6ae988e4712437c1cd3be5380e72f0d3ca5f7d3beabda5e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e75f68311557e6ae988e4712437c1cd3be5380e72f0d3ca5f7d3beabda5e17/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:28 compute-0 podman[87774]: 2026-01-31 07:58:28.077925563 +0000 UTC m=+0.188954967 container init f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:58:28 compute-0 podman[87774]: 2026-01-31 07:58:28.084145475 +0000 UTC m=+0.195174829 container start f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:28 compute-0 podman[87774]: 2026-01-31 07:58:28.114576387 +0000 UTC m=+0.225605711 container attach f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2379191179; not ready for session (expect reconnect)
Jan 31 07:58:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:28 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:28 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:28 compute-0 lvm[87875]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:28 compute-0 lvm[87875]: VG ceph_vg0 finished
Jan 31 07:58:28 compute-0 lvm[87876]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:58:28 compute-0 lvm[87876]: VG ceph_vg1 finished
Jan 31 07:58:28 compute-0 lvm[87878]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:58:28 compute-0 lvm[87878]: VG ceph_vg2 finished
Jan 31 07:58:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 31 07:58:28 compute-0 ceph-mgr[75591]: [devicehealth INFO root] creating mgr pool
Jan 31 07:58:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 31 07:58:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 bash[87774]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:58:28 compute-0 bash[87774]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 07:58:28 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate[87790]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 07:58:28 compute-0 bash[87774]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:58:29 compute-0 systemd[1]: libpod-f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1.scope: Deactivated successfully.
Jan 31 07:58:29 compute-0 podman[87774]: 2026-01-31 07:58:29.019024192 +0000 UTC m=+1.130053506 container died f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 07:58:29 compute-0 systemd[1]: libpod-f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1.scope: Consumed 1.189s CPU time.
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:58:29 compute-0 ceph-mon[75294]: purged_snaps scrub starts
Jan 31 07:58:29 compute-0 ceph-mon[75294]: purged_snaps scrub ok
Jan 31 07:58:29 compute-0 ceph-mon[75294]: OSD bench result of 9565.764328 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:58:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:29 compute-0 ceph-mon[75294]: pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 31 07:58:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:58:29 compute-0 ceph-osd[85864]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 07:58:29 compute-0 ceph-osd[85864]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 07:58:29 compute-0 ceph-osd[85864]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:29 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:29 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 07:58:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9e75f68311557e6ae988e4712437c1cd3be5380e72f0d3ca5f7d3beabda5e17-merged.mount: Deactivated successfully.
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:29 compute-0 podman[87774]: 2026-01-31 07:58:29.222213486 +0000 UTC m=+1.333243060 container remove f743a6b96de788528e889a693d696dba39b27e2aa08e8f8a5bfb1aae5ca865f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2-activate, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.lhuavc(active, since 98s)
Jan 31 07:58:29 compute-0 podman[88041]: 2026-01-31 07:58:29.410084609 +0000 UTC m=+0.076541690 container create f5583687da902203ee10cd59f3bed6eb2d27a9ffdabce9c9bd49464aa5842b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 07:58:29 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2379191179; not ready for session (expect reconnect)
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:29 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:29 compute-0 podman[88041]: 2026-01-31 07:58:29.367611145 +0000 UTC m=+0.034068296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c7d695a6cae95b84183aeb5a4ce4c37ffce4f0465209f73449c58896725c27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c7d695a6cae95b84183aeb5a4ce4c37ffce4f0465209f73449c58896725c27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c7d695a6cae95b84183aeb5a4ce4c37ffce4f0465209f73449c58896725c27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c7d695a6cae95b84183aeb5a4ce4c37ffce4f0465209f73449c58896725c27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c7d695a6cae95b84183aeb5a4ce4c37ffce4f0465209f73449c58896725c27/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:29 compute-0 podman[88041]: 2026-01-31 07:58:29.564218077 +0000 UTC m=+0.230675178 container init f5583687da902203ee10cd59f3bed6eb2d27a9ffdabce9c9bd49464aa5842b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:29 compute-0 podman[88041]: 2026-01-31 07:58:29.572404408 +0000 UTC m=+0.238861529 container start f5583687da902203ee10cd59f3bed6eb2d27a9ffdabce9c9bd49464aa5842b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 31 07:58:29 compute-0 bash[88041]: f5583687da902203ee10cd59f3bed6eb2d27a9ffdabce9c9bd49464aa5842b95
Jan 31 07:58:29 compute-0 ceph-osd[88061]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: pidfile_write: ignore empty --pid-file
Jan 31 07:58:29 compute-0 systemd[1]: Started Ceph osd.2 for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 sudo[86982]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06400 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c06000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 31 07:58:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: load: jerasure load: lrc 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 07:58:29 compute-0 ceph-osd[88061]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 sudo[88090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:29 compute-0 sudo[88090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-0 sudo[88090]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 sudo[88129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c329c07c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 sudo[88129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount shared_bdev_used = 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Git sha 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: DB SUMMARY
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: DB Session ID:  G9JPKFUXF8PFAHDS9BCB
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                     Options.env: 0x55c329a97ea0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                Options.info_log: 0x55c32aae88a0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.write_buffer_manager: 0x55c329afcb40
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.row_cache: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                              Options.wal_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.wal_compression: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Compression algorithms supported:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kZSTD supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: aba24820-69d1-4c03-8d0a-6fb41411e9eb
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846309970283, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846309971826, "job": 1, "event": "recovery_finished"}
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: freelist init
Jan 31 07:58:29 compute-0 ceph-osd[88061]: freelist _read_cfg
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs umount
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bdev(0x55c32a89d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluefs mount shared_bdev_used = 27262976
Jan 31 07:58:29 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Git sha 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: DB SUMMARY
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: DB Session ID:  G9JPKFUXF8PFAHDS9BCA
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                     Options.env: 0x55c32a8e3dc0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                Options.info_log: 0x55c32aae8a20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.write_buffer_manager: 0x55c329afd900
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.row_cache: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                              Options.wal_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.wal_compression: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Compression algorithms supported:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kZSTD supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:29 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:           Options.merge_operator: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c32aae9d40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c329a9b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.compression: LZ4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.num_levels: 7
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: aba24820-69d1-4c03-8d0a-6fb41411e9eb
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846310008294, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846310020752, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846310, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "aba24820-69d1-4c03-8d0a-6fb41411e9eb", "db_session_id": "G9JPKFUXF8PFAHDS9BCA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846310051285, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846310, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "aba24820-69d1-4c03-8d0a-6fb41411e9eb", "db_session_id": "G9JPKFUXF8PFAHDS9BCA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 07:58:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 07:58:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846310081486, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846310, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "aba24820-69d1-4c03-8d0a-6fb41411e9eb", "db_session_id": "G9JPKFUXF8PFAHDS9BCA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:30 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846310105739, "job": 1, "event": "recovery_finished"}
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 07:58:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:30 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:30 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 07:58:30 compute-0 ceph-mon[75294]: osdmap e11: 3 total, 1 up, 3 in
Jan 31 07:58:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 07:58:30 compute-0 ceph-mon[75294]: mgrmap e11: compute-0.lhuavc(active, since 98s)
Jan 31 07:58:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c32ad02000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: DB pointer 0x55c32aca2000
Jan 31 07:58:30 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:58:30 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 31 07:58:30 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:58:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:58:30 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 07:58:30 compute-0 podman[88539]: 2026-01-31 07:58:30.259567466 +0000 UTC m=+0.072703191 container create 6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:30 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 07:58:30 compute-0 ceph-osd[88061]: _get_class not permitted to load lua
Jan 31 07:58:30 compute-0 ceph-osd[88061]: _get_class not permitted to load sdk
Jan 31 07:58:30 compute-0 ceph-osd[88061]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 07:58:30 compute-0 ceph-osd[88061]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 07:58:30 compute-0 ceph-osd[88061]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 07:58:30 compute-0 ceph-osd[88061]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 07:58:30 compute-0 ceph-osd[88061]: osd.2 0 load_pgs
Jan 31 07:58:30 compute-0 ceph-osd[88061]: osd.2 0 load_pgs opened 0 pgs
Jan 31 07:58:30 compute-0 ceph-osd[88061]: osd.2 0 log_to_monitors true
Jan 31 07:58:30 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2[88057]: 2026-01-31T07:58:30.268+0000 7f54cc2258c0 -1 osd.2 0 log_to_monitors true
Jan 31 07:58:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 31 07:58:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 07:58:30 compute-0 podman[88539]: 2026-01-31 07:58:30.212738 +0000 UTC m=+0.025873795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:30 compute-0 systemd[1]: Started libpod-conmon-6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b.scope.
Jan 31 07:58:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:30 compute-0 podman[88539]: 2026-01-31 07:58:30.399241481 +0000 UTC m=+0.212377296 container init 6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:30 compute-0 podman[88539]: 2026-01-31 07:58:30.410839567 +0000 UTC m=+0.223975322 container start 6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:30 compute-0 trusting_engelbart[88589]: 167 167
Jan 31 07:58:30 compute-0 systemd[1]: libpod-6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b.scope: Deactivated successfully.
Jan 31 07:58:30 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2379191179; not ready for session (expect reconnect)
Jan 31 07:58:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:30 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:30 compute-0 podman[88539]: 2026-01-31 07:58:30.44680955 +0000 UTC m=+0.259945305 container attach 6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 07:58:30 compute-0 podman[88539]: 2026-01-31 07:58:30.447373908 +0000 UTC m=+0.260509663 container died 6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfffb64eab14c8bdebf74e699c6e21a98c4bb8f4c8fa4aa01fcdd3d627b59797-merged.mount: Deactivated successfully.
Jan 31 07:58:30 compute-0 podman[88539]: 2026-01-31 07:58:30.696319324 +0000 UTC m=+0.509455069 container remove 6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 07:58:30 compute-0 systemd[1]: libpod-conmon-6a201a7d3c2959c87f15fe62248537c6f3733237c9ea09fa0a42bf8bae3e877b.scope: Deactivated successfully.
Jan 31 07:58:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 31 07:58:30 compute-0 podman[88615]: 2026-01-31 07:58:30.904845431 +0000 UTC m=+0.070574346 container create 2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:58:30 compute-0 podman[88615]: 2026-01-31 07:58:30.867334 +0000 UTC m=+0.033062935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:30 compute-0 systemd[1]: Started libpod-conmon-2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b.scope.
Jan 31 07:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17239de8fc8b20b337aa0f56174c1c257078637a5dc90957f3cdd4a0bb9ba3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17239de8fc8b20b337aa0f56174c1c257078637a5dc90957f3cdd4a0bb9ba3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17239de8fc8b20b337aa0f56174c1c257078637a5dc90957f3cdd4a0bb9ba3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17239de8fc8b20b337aa0f56174c1c257078637a5dc90957f3cdd4a0bb9ba3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 podman[88615]: 2026-01-31 07:58:31.066425718 +0000 UTC m=+0.232154663 container init 2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_ellis, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:58:31 compute-0 podman[88615]: 2026-01-31 07:58:31.075898838 +0000 UTC m=+0.241627783 container start 2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:58:31 compute-0 podman[88615]: 2026-01-31 07:58:31.089068562 +0000 UTC m=+0.254797487 container attach 2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_ellis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 07:58:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 07:58:31 compute-0 ceph-mon[75294]: osdmap e12: 3 total, 1 up, 3 in
Jan 31 07:58:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:31 compute-0 ceph-mon[75294]: from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 07:58:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:31 compute-0 ceph-mon[75294]: pgmap v53: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 31 07:58:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 07:58:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 31 07:58:31 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 31 07:58:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 07:58:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 07:58:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 07:58:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:31 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:31 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:31 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:31 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:31 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 07:58:31 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 07:58:31 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2379191179; not ready for session (expect reconnect)
Jan 31 07:58:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:31 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:31 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:58:31 compute-0 lvm[88707]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:31 compute-0 lvm[88707]: VG ceph_vg0 finished
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.963 iops: 5878.491 elapsed_sec: 0.510
Jan 31 07:58:31 compute-0 ceph-osd[86929]: log_channel(cluster) log [WRN] : OSD bench result of 5878.491406 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 0 waiting for initial osdmap
Jan 31 07:58:31 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1[86925]: 2026-01-31T07:58:31.618+0000 7f41359f9640 -1 osd.1 0 waiting for initial osdmap
Jan 31 07:58:31 compute-0 lvm[88709]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:58:31 compute-0 lvm[88709]: VG ceph_vg1 finished
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 07:58:31 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-1[86925]: 2026-01-31T07:58:31.652+0000 7f412ffec640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 31 07:58:31 compute-0 ceph-osd[86929]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 31 07:58:31 compute-0 lvm[88710]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:58:31 compute-0 lvm[88710]: VG ceph_vg2 finished
Jan 31 07:58:31 compute-0 loving_ellis[88632]: {}
Jan 31 07:58:31 compute-0 systemd[1]: libpod-2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b.scope: Deactivated successfully.
Jan 31 07:58:31 compute-0 podman[88615]: 2026-01-31 07:58:31.787825527 +0000 UTC m=+0.953554442 container died 2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_ellis, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17239de8fc8b20b337aa0f56174c1c257078637a5dc90957f3cdd4a0bb9ba3c-merged.mount: Deactivated successfully.
Jan 31 07:58:32 compute-0 podman[88615]: 2026-01-31 07:58:32.17227415 +0000 UTC m=+1.338003085 container remove 2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:58:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 31 07:58:32 compute-0 ceph-osd[88061]: osd.2 0 done with init, starting boot process
Jan 31 07:58:32 compute-0 ceph-osd[88061]: osd.2 0 start_boot
Jan 31 07:58:32 compute-0 ceph-osd[88061]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 07:58:32 compute-0 ceph-osd[88061]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 07:58:32 compute-0 ceph-osd[88061]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 07:58:32 compute-0 ceph-osd[88061]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 07:58:32 compute-0 ceph-osd[88061]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179] boot
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 31 07:58:32 compute-0 sudo[88129]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:32 compute-0 ceph-osd[86929]: osd.1 14 state: booting -> active
Jan 31 07:58:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:58:32 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:32 compute-0 ceph-mon[75294]: from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 07:58:32 compute-0 ceph-mon[75294]: osdmap e13: 3 total, 1 up, 3 in
Jan 31 07:58:32 compute-0 ceph-mon[75294]: from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 07:58:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:32 compute-0 ceph-mon[75294]: OSD bench result of 5878.491406 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:58:32 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1345003892; not ready for session (expect reconnect)
Jan 31 07:58:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:32 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:32 compute-0 systemd[1]: libpod-conmon-2154835022e6774976144b8d926da54b24032686b58d098c917e487176347c0b.scope: Deactivated successfully.
Jan 31 07:58:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:32 compute-0 sudo[88726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:58:32 compute-0 sudo[88726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:32 compute-0 sudo[88726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:32 compute-0 sudo[88751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:32 compute-0 sudo[88751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:32 compute-0 sudo[88751]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:32 compute-0 sudo[88776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 07:58:32 compute-0 sudo[88776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 07:58:32 compute-0 podman[88845]: 2026-01-31 07:58:32.872009255 +0000 UTC m=+0.094609473 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:33 compute-0 podman[88866]: 2026-01-31 07:58:33.145851416 +0000 UTC m=+0.188070721 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:58:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 07:58:33 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1345003892; not ready for session (expect reconnect)
Jan 31 07:58:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:33 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:33 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 31 07:58:33 compute-0 podman[88845]: 2026-01-31 07:58:33.411702831 +0000 UTC m=+0.634303099 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 07:58:33 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 31 07:58:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:33 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:33 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:58:33 compute-0 ceph-mon[75294]: from='osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:58:33 compute-0 ceph-mon[75294]: osd.1 [v2:192.168.122.100:6806/2379191179,v1:192.168.122.100:6807/2379191179] boot
Jan 31 07:58:33 compute-0 ceph-mon[75294]: osdmap e14: 3 total, 2 up, 3 in
Jan 31 07:58:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 07:58:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:33 compute-0 ceph-mon[75294]: pgmap v56: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 07:58:33 compute-0 sudo[88776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:34 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1345003892; not ready for session (expect reconnect)
Jan 31 07:58:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:34 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:34 compute-0 sudo[88995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:34 compute-0 sudo[88995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:34 compute-0 sudo[88995]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:34 compute-0 sudo[89020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 07:58:34 compute-0 sudo[89020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 07:58:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 31 07:58:34 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 31 07:58:34 compute-0 sudo[89020]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:34 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:34 compute-0 ceph-mon[75294]: purged_snaps scrub starts
Jan 31 07:58:34 compute-0 ceph-mon[75294]: purged_snaps scrub ok
Jan 31 07:58:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:34 compute-0 ceph-mon[75294]: osdmap e15: 3 total, 2 up, 3 in
Jan 31 07:58:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:34 compute-0 sudo[89077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:34 compute-0 sudo[89077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:34 compute-0 sudo[89077]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 07:58:34 compute-0 sudo[89102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:58:34 compute-0 sudo[89102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:35 compute-0 podman[89138]: 2026-01-31 07:58:35.014033493 +0000 UTC m=+0.051913093 container create 7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 07:58:35 compute-0 podman[89138]: 2026-01-31 07:58:34.978590706 +0000 UTC m=+0.016470326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:35 compute-0 systemd[1]: Started libpod-conmon-7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f.scope.
Jan 31 07:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:35 compute-0 podman[89138]: 2026-01-31 07:58:35.169784981 +0000 UTC m=+0.207664581 container init 7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elbakyan, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 07:58:35 compute-0 podman[89138]: 2026-01-31 07:58:35.174800895 +0000 UTC m=+0.212680505 container start 7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 07:58:35 compute-0 magical_elbakyan[89155]: 167 167
Jan 31 07:58:35 compute-0 systemd[1]: libpod-7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f.scope: Deactivated successfully.
Jan 31 07:58:35 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1345003892; not ready for session (expect reconnect)
Jan 31 07:58:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:35 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:35 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:35 compute-0 podman[89138]: 2026-01-31 07:58:35.238367964 +0000 UTC m=+0.276247764 container attach 7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:35 compute-0 podman[89138]: 2026-01-31 07:58:35.23917338 +0000 UTC m=+0.277052990 container died 7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-38b71316732f2ffb6665b811ffcbd51c55ce5b568ba3ade626d48d89b4afcdab-merged.mount: Deactivated successfully.
Jan 31 07:58:35 compute-0 podman[89138]: 2026-01-31 07:58:35.470071453 +0000 UTC m=+0.507951053 container remove 7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elbakyan, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 07:58:35 compute-0 systemd[1]: libpod-conmon-7d1696c5c1b87ee61e5ab6b430835f99261f156874abb636a1d949a626e04d6f.scope: Deactivated successfully.
Jan 31 07:58:35 compute-0 podman[89180]: 2026-01-31 07:58:35.626756479 +0000 UTC m=+0.089157056 container create bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 07:58:35 compute-0 podman[89180]: 2026-01-31 07:58:35.556025189 +0000 UTC m=+0.018425786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:35 compute-0 ceph-mon[75294]: osdmap e16: 3 total, 2 up, 3 in
Jan 31 07:58:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:35 compute-0 ceph-mon[75294]: pgmap v59: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 07:58:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:35 compute-0 systemd[1]: Started libpod-conmon-bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e.scope.
Jan 31 07:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339abcc6531bc646abaaf18ed21af58c5c3d7983e4ee79a5a3e2a850fee6ec9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339abcc6531bc646abaaf18ed21af58c5c3d7983e4ee79a5a3e2a850fee6ec9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339abcc6531bc646abaaf18ed21af58c5c3d7983e4ee79a5a3e2a850fee6ec9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339abcc6531bc646abaaf18ed21af58c5c3d7983e4ee79a5a3e2a850fee6ec9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 podman[89180]: 2026-01-31 07:58:35.910371658 +0000 UTC m=+0.372772245 container init bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldstine, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:58:35 compute-0 podman[89180]: 2026-01-31 07:58:35.916394544 +0000 UTC m=+0.378795121 container start bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldstine, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:36 compute-0 podman[89180]: 2026-01-31 07:58:36.062566427 +0000 UTC m=+0.524967104 container attach bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldstine, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1345003892; not ready for session (expect reconnect)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]: [
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:     {
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "available": false,
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "being_replaced": false,
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "ceph_device_lvm": false,
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "lsm_data": {},
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "lvs": [],
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "path": "/dev/sr0",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "rejected_reasons": [
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "Insufficient space (<5GB)",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "Has a FileSystem"
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         ],
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         "sys_api": {
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "actuators": null,
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "device_nodes": [
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:                 "sr0"
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             ],
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "devname": "sr0",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "human_readable_size": "482.00 KB",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "id_bus": "ata",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "model": "QEMU DVD-ROM",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "nr_requests": "2",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "parent": "/dev/sr0",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "partitions": {},
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "path": "/dev/sr0",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "removable": "1",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "rev": "2.5+",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "ro": "0",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "rotational": "1",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "sas_address": "",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "sas_device_handle": "",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "scheduler_mode": "mq-deadline",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "sectors": 0,
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "sectorsize": "2048",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "size": 493568.0,
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "support_discard": "2048",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "type": "disk",
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:             "vendor": "QEMU"
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:         }
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]:     }
Jan 31 07:58:36 compute-0 quizzical_goldstine[89196]: ]
Jan 31 07:58:36 compute-0 systemd[1]: libpod-bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e.scope: Deactivated successfully.
Jan 31 07:58:36 compute-0 podman[89180]: 2026-01-31 07:58:36.3267052 +0000 UTC m=+0.789105797 container died bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldstine, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 07:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7339abcc6531bc646abaaf18ed21af58c5c3d7983e4ee79a5a3e2a850fee6ec9-merged.mount: Deactivated successfully.
Jan 31 07:58:36 compute-0 podman[89180]: 2026-01-31 07:58:36.735743538 +0000 UTC m=+1.198144115 container remove bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 07:58:36 compute-0 sudo[89102]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:36 compute-0 systemd[1]: libpod-conmon-bf2e4ef9a8528a2360eb68cbe4c0fb32207216087aa238db9b88599189e20f7e.scope: Deactivated successfully.
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Check health
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 07:58:36 compute-0 sudo[89931]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 31 07:58:36 compute-0 sudo[89931]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 07:58:36 compute-0 sudo[89931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 31 07:58:36 compute-0 sudo[89931]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 07:58:36 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:58:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:58:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:36 compute-0 sudo[89934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:36 compute-0 sudo[89934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:36 compute-0 sudo[89934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:37 compute-0 sudo[89959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 07:58:37 compute-0 sudo[89959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: pgmap v60: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 07:58:37 compute-0 ceph-mon[75294]: Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:37 compute-0 ceph-mgr[75591]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1345003892; not ready for session (expect reconnect)
Jan 31 07:58:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:37 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:37 compute-0 ceph-mgr[75591]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:58:37 compute-0 podman[89995]: 2026-01-31 07:58:37.272953066 +0000 UTC m=+0.051095548 container create 3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:58:37 compute-0 systemd[1]: Started libpod-conmon-3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194.scope.
Jan 31 07:58:37 compute-0 podman[89995]: 2026-01-31 07:58:37.241689847 +0000 UTC m=+0.019832349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:37 compute-0 podman[89995]: 2026-01-31 07:58:37.37671625 +0000 UTC m=+0.154858752 container init 3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_booth, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:37 compute-0 podman[89995]: 2026-01-31 07:58:37.381706322 +0000 UTC m=+0.159848804 container start 3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_booth, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 07:58:37 compute-0 youthful_booth[90011]: 167 167
Jan 31 07:58:37 compute-0 systemd[1]: libpod-3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194.scope: Deactivated successfully.
Jan 31 07:58:37 compute-0 conmon[90011]: conmon 3306a8950f13fb67aeaa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194.scope/container/memory.events
Jan 31 07:58:37 compute-0 podman[89995]: 2026-01-31 07:58:37.398714254 +0000 UTC m=+0.176856746 container attach 3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:37 compute-0 podman[89995]: 2026-01-31 07:58:37.39919944 +0000 UTC m=+0.177341922 container died 3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_booth, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f8c28c5e4138f2d8cfaddc9b63ca8085d3d74f5e52e0d4375bc14b59e516b3-merged.mount: Deactivated successfully.
Jan 31 07:58:37 compute-0 podman[89995]: 2026-01-31 07:58:37.491416378 +0000 UTC m=+0.269558860 container remove 3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_booth, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:58:37 compute-0 systemd[1]: libpod-conmon-3306a8950f13fb67aeaad0429d13d1587b97763474e05ee08f167327827d4194.scope: Deactivated successfully.
Jan 31 07:58:37 compute-0 podman[90035]: 2026-01-31 07:58:37.593280533 +0000 UTC m=+0.036220873 container create 36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 07:58:37 compute-0 systemd[1]: Started libpod-conmon-36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2.scope.
Jan 31 07:58:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4859ee2b315e1b6feace84391c958de088660bdf72b35056505228b2a6706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4859ee2b315e1b6feace84391c958de088660bdf72b35056505228b2a6706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4859ee2b315e1b6feace84391c958de088660bdf72b35056505228b2a6706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4859ee2b315e1b6feace84391c958de088660bdf72b35056505228b2a6706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4859ee2b315e1b6feace84391c958de088660bdf72b35056505228b2a6706/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:37 compute-0 podman[90035]: 2026-01-31 07:58:37.574757275 +0000 UTC m=+0.017697635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:37 compute-0 podman[90035]: 2026-01-31 07:58:37.685423749 +0000 UTC m=+0.128364129 container init 36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hugle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:37 compute-0 podman[90035]: 2026-01-31 07:58:37.690247117 +0000 UTC m=+0.133187457 container start 36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hugle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 07:58:37 compute-0 podman[90035]: 2026-01-31 07:58:37.694476417 +0000 UTC m=+0.137416807 container attach 36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 37.130 iops: 9505.169 elapsed_sec: 0.316
Jan 31 07:58:37 compute-0 ceph-osd[88061]: log_channel(cluster) log [WRN] : OSD bench result of 9505.169073 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:58:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2[88057]: 2026-01-31T07:58:37.724+0000 7f54c89b9640 -1 osd.2 0 waiting for initial osdmap
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 0 waiting for initial osdmap
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 07:58:37 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-osd-2[88057]: 2026-01-31T07:58:37.748+0000 7f54c2fac640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 16 set_numa_affinity not setting numa affinity
Jan 31 07:58:37 compute-0 ceph-osd[88061]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 31 07:58:38 compute-0 great_hugle[90051]: --> passed data devices: 0 physical, 3 LVM
Jan 31 07:58:38 compute-0 great_hugle[90051]: --> All data devices are unavailable
Jan 31 07:58:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 07:58:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 31 07:58:38 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892] boot
Jan 31 07:58:38 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 31 07:58:38 compute-0 systemd[1]: libpod-36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2.scope: Deactivated successfully.
Jan 31 07:58:38 compute-0 podman[90035]: 2026-01-31 07:58:38.098599234 +0000 UTC m=+0.541539574 container died 36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 07:58:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 07:58:38 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:38 compute-0 ceph-osd[88061]: osd.2 17 state: booting -> active
Jan 31 07:58:38 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-85d4859ee2b315e1b6feace84391c958de088660bdf72b35056505228b2a6706-merged.mount: Deactivated successfully.
Jan 31 07:58:38 compute-0 podman[90035]: 2026-01-31 07:58:38.13791894 +0000 UTC m=+0.580859270 container remove 36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hugle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:38 compute-0 systemd[1]: libpod-conmon-36e255a9a533d1a821695e286819db09f6154b8e12407b49d0299c64eee38cc2.scope: Deactivated successfully.
Jan 31 07:58:38 compute-0 sudo[89959]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:38 compute-0 sudo[90083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:38 compute-0 sudo[90083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:38 compute-0 sudo[90083]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:38 compute-0 sudo[90108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 07:58:38 compute-0 sudo[90108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:38 compute-0 podman[90144]: 2026-01-31 07:58:38.49453913 +0000 UTC m=+0.016952751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:38 compute-0 podman[90144]: 2026-01-31 07:58:38.662842042 +0000 UTC m=+0.185255643 container create bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:58:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:38 compute-0 systemd[1]: Started libpod-conmon-bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053.scope.
Jan 31 07:58:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:38 compute-0 podman[90144]: 2026-01-31 07:58:38.99526323 +0000 UTC m=+0.517676891 container init bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:39 compute-0 podman[90144]: 2026-01-31 07:58:39.0027888 +0000 UTC m=+0.525202401 container start bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:39 compute-0 ecstatic_mcclintock[90161]: 167 167
Jan 31 07:58:39 compute-0 systemd[1]: libpod-bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053.scope: Deactivated successfully.
Jan 31 07:58:39 compute-0 podman[90144]: 2026-01-31 07:58:39.077325456 +0000 UTC m=+0.599739057 container attach bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:39 compute-0 podman[90144]: 2026-01-31 07:58:39.077703438 +0000 UTC m=+0.600117069 container died bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:58:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 07:58:39 compute-0 ceph-mon[75294]: OSD bench result of 9505.169073 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:58:39 compute-0 ceph-mon[75294]: osd.2 [v2:192.168.122.100:6810/1345003892,v1:192.168.122.100:6811/1345003892] boot
Jan 31 07:58:39 compute-0 ceph-mon[75294]: osdmap e17: 3 total, 3 up, 3 in
Jan 31 07:58:39 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 07:58:39 compute-0 ceph-mon[75294]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dc3295357307a2c68faca9bd8245d615aae1ee5efebcc7cb3885e27bab05794-merged.mount: Deactivated successfully.
Jan 31 07:58:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 31 07:58:39 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.lhuavc(active, since 109s)
Jan 31 07:58:39 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 31 07:58:39 compute-0 podman[90144]: 2026-01-31 07:58:39.406261577 +0000 UTC m=+0.928675178 container remove bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:39 compute-0 systemd[1]: libpod-conmon-bf5cd8e74ab2d0e837040ed3844a0b743f084054ba8c495e3d6cc61e3ab73053.scope: Deactivated successfully.
Jan 31 07:58:39 compute-0 podman[90186]: 2026-01-31 07:58:39.525305879 +0000 UTC m=+0.043132005 container create 3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:39 compute-0 systemd[1]: Started libpod-conmon-3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0.scope.
Jan 31 07:58:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01294e55bbb8fe420cee3332c541c6efb5f666d97a685a1dc72de171758fba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01294e55bbb8fe420cee3332c541c6efb5f666d97a685a1dc72de171758fba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01294e55bbb8fe420cee3332c541c6efb5f666d97a685a1dc72de171758fba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b01294e55bbb8fe420cee3332c541c6efb5f666d97a685a1dc72de171758fba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:39 compute-0 podman[90186]: 2026-01-31 07:58:39.500327342 +0000 UTC m=+0.018153448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:39 compute-0 podman[90186]: 2026-01-31 07:58:39.609557143 +0000 UTC m=+0.127383269 container init 3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:39 compute-0 podman[90186]: 2026-01-31 07:58:39.615779404 +0000 UTC m=+0.133605510 container start 3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keldysh, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:39 compute-0 podman[90186]: 2026-01-31 07:58:39.623919484 +0000 UTC m=+0.141745610 container attach 3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]: {
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:     "0": [
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:         {
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "devices": [
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "/dev/loop3"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             ],
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_name": "ceph_lv0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_size": "21470642176",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "name": "ceph_lv0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "tags": {
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.crush_device_class": "",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.encrypted": "0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osd_id": "0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.type": "block",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.vdo": "0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.with_tpm": "0"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             },
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "type": "block",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "vg_name": "ceph_vg0"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:         }
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:     ],
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:     "1": [
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:         {
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "devices": [
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "/dev/loop4"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             ],
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_name": "ceph_lv1",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_size": "21470642176",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "name": "ceph_lv1",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "tags": {
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.crush_device_class": "",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.encrypted": "0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osd_id": "1",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.type": "block",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.vdo": "0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.with_tpm": "0"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             },
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "type": "block",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "vg_name": "ceph_vg1"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:         }
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:     ],
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:     "2": [
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:         {
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "devices": [
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "/dev/loop5"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             ],
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_name": "ceph_lv2",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_size": "21470642176",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "name": "ceph_lv2",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "tags": {
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.crush_device_class": "",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.encrypted": "0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osd_id": "2",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.type": "block",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.vdo": "0",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:                 "ceph.with_tpm": "0"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             },
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "type": "block",
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:             "vg_name": "ceph_vg2"
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:         }
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]:     ]
Jan 31 07:58:39 compute-0 dreamy_keldysh[90203]: }
Jan 31 07:58:39 compute-0 systemd[1]: libpod-3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0.scope: Deactivated successfully.
Jan 31 07:58:39 compute-0 podman[90186]: 2026-01-31 07:58:39.90742222 +0000 UTC m=+0.425248316 container died 3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 31 07:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b01294e55bbb8fe420cee3332c541c6efb5f666d97a685a1dc72de171758fba-merged.mount: Deactivated successfully.
Jan 31 07:58:39 compute-0 podman[90186]: 2026-01-31 07:58:39.95042219 +0000 UTC m=+0.468248306 container remove 3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keldysh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:39 compute-0 systemd[1]: libpod-conmon-3cdb564762a154ffe49e752ce3208c2ad6428932a77e9a81e2ec40e8cce37dc0.scope: Deactivated successfully.
Jan 31 07:58:39 compute-0 sudo[90108]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:40 compute-0 sudo[90224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:40 compute-0 sudo[90224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:40 compute-0 sudo[90224]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:40 compute-0 sudo[90249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 07:58:40 compute-0 sudo[90249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:40 compute-0 podman[90287]: 2026-01-31 07:58:40.334161861 +0000 UTC m=+0.033747716 container create beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:40 compute-0 systemd[1]: Started libpod-conmon-beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6.scope.
Jan 31 07:58:40 compute-0 ceph-mon[75294]: mgrmap e12: compute-0.lhuavc(active, since 109s)
Jan 31 07:58:40 compute-0 ceph-mon[75294]: osdmap e18: 3 total, 3 up, 3 in
Jan 31 07:58:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:40 compute-0 podman[90287]: 2026-01-31 07:58:40.318267463 +0000 UTC m=+0.017853338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:40 compute-0 podman[90287]: 2026-01-31 07:58:40.417227509 +0000 UTC m=+0.116813434 container init beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:58:40 compute-0 podman[90287]: 2026-01-31 07:58:40.422324785 +0000 UTC m=+0.121910670 container start beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pike, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:40 compute-0 podman[90287]: 2026-01-31 07:58:40.42636616 +0000 UTC m=+0.125952025 container attach beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:40 compute-0 gallant_pike[90303]: 167 167
Jan 31 07:58:40 compute-0 systemd[1]: libpod-beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6.scope: Deactivated successfully.
Jan 31 07:58:40 compute-0 podman[90287]: 2026-01-31 07:58:40.427590966 +0000 UTC m=+0.127176801 container died beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f8cc5d375c2454d45102cda20749aefa976de02ee387a69acca84b75266da2-merged.mount: Deactivated successfully.
Jan 31 07:58:40 compute-0 podman[90287]: 2026-01-31 07:58:40.466718657 +0000 UTC m=+0.166304502 container remove beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_pike, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 07:58:40 compute-0 systemd[1]: libpod-conmon-beb4836c874a60a467ca26fb610a08eb60808ecef0635256eef96ebc592ee2d6.scope: Deactivated successfully.
Jan 31 07:58:40 compute-0 podman[90327]: 2026-01-31 07:58:40.57863742 +0000 UTC m=+0.040393499 container create 0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 07:58:40 compute-0 systemd[1]: Started libpod-conmon-0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f.scope.
Jan 31 07:58:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeffef75e9777097ce16edc5983e8086fa5d3fae0d2a258ff56f185d5b910858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeffef75e9777097ce16edc5983e8086fa5d3fae0d2a258ff56f185d5b910858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeffef75e9777097ce16edc5983e8086fa5d3fae0d2a258ff56f185d5b910858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeffef75e9777097ce16edc5983e8086fa5d3fae0d2a258ff56f185d5b910858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:40 compute-0 podman[90327]: 2026-01-31 07:58:40.561865605 +0000 UTC m=+0.023621724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:40 compute-0 podman[90327]: 2026-01-31 07:58:40.659358536 +0000 UTC m=+0.121114645 container init 0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 07:58:40 compute-0 podman[90327]: 2026-01-31 07:58:40.664592357 +0000 UTC m=+0.126348456 container start 0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:40 compute-0 podman[90327]: 2026-01-31 07:58:40.670535899 +0000 UTC m=+0.132292028 container attach 0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:41 compute-0 lvm[90418]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:41 compute-0 lvm[90418]: VG ceph_vg0 finished
Jan 31 07:58:41 compute-0 lvm[90421]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:58:41 compute-0 lvm[90421]: VG ceph_vg1 finished
Jan 31 07:58:41 compute-0 lvm[90423]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:58:41 compute-0 lvm[90423]: VG ceph_vg2 finished
Jan 31 07:58:41 compute-0 lvm[90424]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:41 compute-0 lvm[90424]: VG ceph_vg0 finished
Jan 31 07:58:41 compute-0 heuristic_swirles[90342]: {}
Jan 31 07:58:41 compute-0 ceph-mon[75294]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:41 compute-0 systemd[1]: libpod-0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f.scope: Deactivated successfully.
Jan 31 07:58:41 compute-0 podman[90327]: 2026-01-31 07:58:41.406482995 +0000 UTC m=+0.868239074 container died 0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 07:58:41 compute-0 systemd[1]: libpod-0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f.scope: Consumed 1.029s CPU time.
Jan 31 07:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeffef75e9777097ce16edc5983e8086fa5d3fae0d2a258ff56f185d5b910858-merged.mount: Deactivated successfully.
Jan 31 07:58:41 compute-0 podman[90327]: 2026-01-31 07:58:41.474323386 +0000 UTC m=+0.936079465 container remove 0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:58:41 compute-0 systemd[1]: libpod-conmon-0f4f938d4e950e81a4ac2f8f649669c515635f70cc4918a814c23ccd9786511f.scope: Deactivated successfully.
Jan 31 07:58:41 compute-0 sudo[90249]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:41 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:41 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:41 compute-0 sudo[90441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:58:41 compute-0 sudo[90441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:41 compute-0 sudo[90441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:41 compute-0 sudo[90466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:41 compute-0 sudo[90466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:41 compute-0 sudo[90466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:41 compute-0 sudo[90491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 07:58:41 compute-0 sudo[90491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:42 compute-0 podman[90559]: 2026-01-31 07:58:42.043559978 +0000 UTC m=+0.046318552 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 07:58:42 compute-0 podman[90559]: 2026-01-31 07:58:42.121544169 +0000 UTC m=+0.124302723 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:42 compute-0 sudo[90491]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:58:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:58:42 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:42 compute-0 sudo[90709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:42 compute-0 sudo[90709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:42 compute-0 sudo[90709]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:42 compute-0 sudo[90734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 07:58:42 compute-0 sudo[90734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:42 compute-0 podman[90772]: 2026-01-31 07:58:42.975812835 +0000 UTC m=+0.036027036 container create 4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sammet, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:58:43 compute-0 systemd[1]: Started libpod-conmon-4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41.scope.
Jan 31 07:58:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:43 compute-0 podman[90772]: 2026-01-31 07:58:43.048790994 +0000 UTC m=+0.109005225 container init 4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sammet, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 07:58:43 compute-0 podman[90772]: 2026-01-31 07:58:43.054378565 +0000 UTC m=+0.114592766 container start 4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 07:58:43 compute-0 podman[90772]: 2026-01-31 07:58:42.96032882 +0000 UTC m=+0.020543041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:43 compute-0 amazing_sammet[90789]: 167 167
Jan 31 07:58:43 compute-0 systemd[1]: libpod-4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41.scope: Deactivated successfully.
Jan 31 07:58:43 compute-0 podman[90772]: 2026-01-31 07:58:43.059383038 +0000 UTC m=+0.119597339 container attach 4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sammet, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:43 compute-0 podman[90772]: 2026-01-31 07:58:43.060643057 +0000 UTC m=+0.120857258 container died 4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-17270d07843be9b9ed891f22d7b95e264fa3587eade87139be9c22092f6ab075-merged.mount: Deactivated successfully.
Jan 31 07:58:43 compute-0 podman[90772]: 2026-01-31 07:58:43.097762566 +0000 UTC m=+0.157976767 container remove 4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sammet, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:43 compute-0 systemd[1]: libpod-conmon-4b9c96e328cbf5c677da5a73312d061b5ff4a3db1cf781ca75818630da35aa41.scope: Deactivated successfully.
Jan 31 07:58:43 compute-0 podman[90811]: 2026-01-31 07:58:43.212246188 +0000 UTC m=+0.044121485 container create 7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wu, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 07:58:43 compute-0 systemd[1]: Started libpod-conmon-7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f.scope.
Jan 31 07:58:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb9ac7470fb751097c102bbf266fece410001c97213145e86b23d306090571e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb9ac7470fb751097c102bbf266fece410001c97213145e86b23d306090571e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb9ac7470fb751097c102bbf266fece410001c97213145e86b23d306090571e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb9ac7470fb751097c102bbf266fece410001c97213145e86b23d306090571e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb9ac7470fb751097c102bbf266fece410001c97213145e86b23d306090571e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:43 compute-0 podman[90811]: 2026-01-31 07:58:43.187985494 +0000 UTC m=+0.019860811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:43 compute-0 podman[90811]: 2026-01-31 07:58:43.286723943 +0000 UTC m=+0.118599280 container init 7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:43 compute-0 podman[90811]: 2026-01-31 07:58:43.29481503 +0000 UTC m=+0.126690327 container start 7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:58:43 compute-0 podman[90811]: 2026-01-31 07:58:43.303721914 +0000 UTC m=+0.135597231 container attach 7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wu, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:58:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:58:43 compute-0 ceph-mon[75294]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:43 compute-0 infallible_wu[90827]: --> passed data devices: 0 physical, 3 LVM
Jan 31 07:58:43 compute-0 infallible_wu[90827]: --> All data devices are unavailable
Jan 31 07:58:43 compute-0 systemd[1]: libpod-7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f.scope: Deactivated successfully.
Jan 31 07:58:43 compute-0 podman[90811]: 2026-01-31 07:58:43.711695439 +0000 UTC m=+0.543570736 container died 7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wu, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdb9ac7470fb751097c102bbf266fece410001c97213145e86b23d306090571e-merged.mount: Deactivated successfully.
Jan 31 07:58:43 compute-0 podman[90811]: 2026-01-31 07:58:43.753911383 +0000 UTC m=+0.585786680 container remove 7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 31 07:58:43 compute-0 systemd[1]: libpod-conmon-7a4a12b131c094eec393b59cabdd689fde88b7a1aa8f1f464d3c09453cc2354f.scope: Deactivated successfully.
Jan 31 07:58:43 compute-0 sudo[90734]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:43 compute-0 sudo[90860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:43 compute-0 sudo[90860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:43 compute-0 sudo[90860]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:43 compute-0 sudo[90885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 07:58:43 compute-0 sudo[90885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:44 compute-0 podman[90923]: 2026-01-31 07:58:44.103690373 +0000 UTC m=+0.032456276 container create ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:44 compute-0 systemd[1]: Started libpod-conmon-ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb.scope.
Jan 31 07:58:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:44 compute-0 podman[90923]: 2026-01-31 07:58:44.165412267 +0000 UTC m=+0.094178200 container init ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:58:44 compute-0 podman[90923]: 2026-01-31 07:58:44.170898115 +0000 UTC m=+0.099664018 container start ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:58:44 compute-0 pedantic_pasteur[90939]: 167 167
Jan 31 07:58:44 compute-0 systemd[1]: libpod-ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb.scope: Deactivated successfully.
Jan 31 07:58:44 compute-0 podman[90923]: 2026-01-31 07:58:44.175890709 +0000 UTC m=+0.104656632 container attach ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:44 compute-0 podman[90923]: 2026-01-31 07:58:44.17660245 +0000 UTC m=+0.105368353 container died ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 07:58:44 compute-0 podman[90923]: 2026-01-31 07:58:44.089155638 +0000 UTC m=+0.017921601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d6605d3d6c7b63be9da0c6949da5fb6ee40b3c509fee43b19da2c286a2b6dec-merged.mount: Deactivated successfully.
Jan 31 07:58:44 compute-0 podman[90923]: 2026-01-31 07:58:44.212068198 +0000 UTC m=+0.140834111 container remove ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_pasteur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 07:58:44 compute-0 systemd[1]: libpod-conmon-ef2e1c06e795f8d4b55a33e9a9557216bd6381ab3712a5ba631617e5bce862eb.scope: Deactivated successfully.
Jan 31 07:58:44 compute-0 podman[90961]: 2026-01-31 07:58:44.317848043 +0000 UTC m=+0.034828620 container create 18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 07:58:44 compute-0 systemd[1]: Started libpod-conmon-18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705.scope.
Jan 31 07:58:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279556303c0abc0282b20011fd059ae3069b20a0a1b676222c591b9d4beb1512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279556303c0abc0282b20011fd059ae3069b20a0a1b676222c591b9d4beb1512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279556303c0abc0282b20011fd059ae3069b20a0a1b676222c591b9d4beb1512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279556303c0abc0282b20011fd059ae3069b20a0a1b676222c591b9d4beb1512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:44 compute-0 podman[90961]: 2026-01-31 07:58:44.393160183 +0000 UTC m=+0.110140780 container init 18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_zhukovsky, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 07:58:44 compute-0 podman[90961]: 2026-01-31 07:58:44.303157212 +0000 UTC m=+0.020137819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:44 compute-0 podman[90961]: 2026-01-31 07:58:44.399445566 +0000 UTC m=+0.116426143 container start 18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_zhukovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:44 compute-0 podman[90961]: 2026-01-31 07:58:44.40282307 +0000 UTC m=+0.119803657 container attach 18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]: {
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:     "0": [
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:         {
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "devices": [
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "/dev/loop3"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             ],
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_name": "ceph_lv0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_size": "21470642176",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "name": "ceph_lv0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "tags": {
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.crush_device_class": "",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.encrypted": "0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osd_id": "0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.type": "block",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.vdo": "0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.with_tpm": "0"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             },
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "type": "block",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "vg_name": "ceph_vg0"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:         }
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:     ],
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:     "1": [
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:         {
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "devices": [
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "/dev/loop4"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             ],
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_name": "ceph_lv1",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_size": "21470642176",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "name": "ceph_lv1",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "tags": {
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.crush_device_class": "",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.encrypted": "0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osd_id": "1",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.type": "block",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.vdo": "0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.with_tpm": "0"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             },
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "type": "block",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "vg_name": "ceph_vg1"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:         }
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:     ],
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:     "2": [
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:         {
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "devices": [
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "/dev/loop5"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             ],
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_name": "ceph_lv2",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_size": "21470642176",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "name": "ceph_lv2",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "tags": {
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.crush_device_class": "",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.encrypted": "0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.objectstore": "bluestore",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osd_id": "2",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.type": "block",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.vdo": "0",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:                 "ceph.with_tpm": "0"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             },
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "type": "block",
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:             "vg_name": "ceph_vg2"
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:         }
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]:     ]
Jan 31 07:58:44 compute-0 exciting_zhukovsky[90978]: }
Jan 31 07:58:44 compute-0 systemd[1]: libpod-18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705.scope: Deactivated successfully.
Jan 31 07:58:44 compute-0 podman[90961]: 2026-01-31 07:58:44.696124927 +0000 UTC m=+0.413105504 container died 18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-279556303c0abc0282b20011fd059ae3069b20a0a1b676222c591b9d4beb1512-merged.mount: Deactivated successfully.
Jan 31 07:58:44 compute-0 podman[90961]: 2026-01-31 07:58:44.735539325 +0000 UTC m=+0.452519902 container remove 18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 07:58:44 compute-0 systemd[1]: libpod-conmon-18e5041e0116ae0952050fc84076c8288634d5f30deb2252caec7a23896d7705.scope: Deactivated successfully.
Jan 31 07:58:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:44 compute-0 sudo[90885]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:44 compute-0 sudo[91000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:44 compute-0 sudo[91000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:44 compute-0 sudo[91000]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:44 compute-0 sudo[91025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 07:58:44 compute-0 sudo[91025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:45 compute-0 podman[91062]: 2026-01-31 07:58:45.101059677 +0000 UTC m=+0.032473897 container create 7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 07:58:45 compute-0 systemd[1]: Started libpod-conmon-7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160.scope.
Jan 31 07:58:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:45 compute-0 podman[91062]: 2026-01-31 07:58:45.161519433 +0000 UTC m=+0.092933663 container init 7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:45 compute-0 podman[91062]: 2026-01-31 07:58:45.168385712 +0000 UTC m=+0.099799932 container start 7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:45 compute-0 eager_matsumoto[91079]: 167 167
Jan 31 07:58:45 compute-0 systemd[1]: libpod-7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160.scope: Deactivated successfully.
Jan 31 07:58:45 compute-0 podman[91062]: 2026-01-31 07:58:45.172601932 +0000 UTC m=+0.104016152 container attach 7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:58:45 compute-0 podman[91062]: 2026-01-31 07:58:45.173664835 +0000 UTC m=+0.105079065 container died 7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:45 compute-0 podman[91062]: 2026-01-31 07:58:45.086682897 +0000 UTC m=+0.018097147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3929aaf8047ec5eb4f6978cd24ebeae927f4f2cb520b719b5183ee10948caaad-merged.mount: Deactivated successfully.
Jan 31 07:58:45 compute-0 podman[91062]: 2026-01-31 07:58:45.202552251 +0000 UTC m=+0.133966471 container remove 7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 07:58:45 compute-0 systemd[1]: libpod-conmon-7c6f46ddbc81cc8efd4d87298ba7b463fae229fc37cba43d8160d298674aa160.scope: Deactivated successfully.
Jan 31 07:58:45 compute-0 podman[91103]: 2026-01-31 07:58:45.31823397 +0000 UTC m=+0.037168241 container create 3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:45 compute-0 systemd[1]: Started libpod-conmon-3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a.scope.
Jan 31 07:58:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d3c701dc5226dae83e1a2516d9abca61e9206c3596ce5b181441436665fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d3c701dc5226dae83e1a2516d9abca61e9206c3596ce5b181441436665fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d3c701dc5226dae83e1a2516d9abca61e9206c3596ce5b181441436665fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d3c701dc5226dae83e1a2516d9abca61e9206c3596ce5b181441436665fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:45 compute-0 podman[91103]: 2026-01-31 07:58:45.394268963 +0000 UTC m=+0.113203234 container init 3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bhaskara, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 07:58:45 compute-0 podman[91103]: 2026-01-31 07:58:45.398492352 +0000 UTC m=+0.117426623 container start 3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 07:58:45 compute-0 podman[91103]: 2026-01-31 07:58:45.303514648 +0000 UTC m=+0.022448939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:58:45 compute-0 podman[91103]: 2026-01-31 07:58:45.401266417 +0000 UTC m=+0.120200698 container attach 3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:58:45 compute-0 ceph-mon[75294]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:45 compute-0 lvm[91196]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:58:45 compute-0 lvm[91196]: VG ceph_vg0 finished
Jan 31 07:58:45 compute-0 lvm[91199]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:58:45 compute-0 lvm[91199]: VG ceph_vg1 finished
Jan 31 07:58:45 compute-0 lvm[91200]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:58:45 compute-0 lvm[91200]: VG ceph_vg2 finished
Jan 31 07:58:46 compute-0 interesting_bhaskara[91120]: {}
Jan 31 07:58:46 compute-0 systemd[1]: libpod-3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a.scope: Deactivated successfully.
Jan 31 07:58:46 compute-0 podman[91103]: 2026-01-31 07:58:46.074472638 +0000 UTC m=+0.793406909 container died 3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:58:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-954d3c701dc5226dae83e1a2516d9abca61e9206c3596ce5b181441436665fa6-merged.mount: Deactivated successfully.
Jan 31 07:58:46 compute-0 podman[91103]: 2026-01-31 07:58:46.109319707 +0000 UTC m=+0.828253988 container remove 3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 07:58:46 compute-0 systemd[1]: libpod-conmon-3e36f31a1ad80f3eea37cf5b54f3c4c2172cba4f07ab6bfb3f3e80e415d5968a.scope: Deactivated successfully.
Jan 31 07:58:46 compute-0 sudo[91025]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:58:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:58:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:46 compute-0 sudo[91213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:58:46 compute-0 sudo[91213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:46 compute-0 sudo[91213]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:58:47 compute-0 ceph-mon[75294]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:49 compute-0 ceph-mon[75294]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_07:58:50
Jan 31 07:58:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:58:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 07:58:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.mgr']
Jan 31 07:58:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 07:58:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:51 compute-0 ceph-mon[75294]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:54 compute-0 ceph-mon[75294]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:55 compute-0 ceph-mon[75294]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:56 compute-0 sudo[91261]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujfosvjlmjoxbwblxdusbxdjxmxmreg ; /usr/bin/python3'
Jan 31 07:58:56 compute-0 sudo[91261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:57 compute-0 python3[91263]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:58:57 compute-0 podman[91265]: 2026-01-31 07:58:57.063728551 +0000 UTC m=+0.037718839 container create 920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf (image=quay.io/ceph/ceph:v20, name=nervous_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 07:58:57 compute-0 systemd[1]: Started libpod-conmon-920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf.scope.
Jan 31 07:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44fa263c39f7844dfa78e7bcd9e0b4ecdabd95f70db7dbecc33da98bab181c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44fa263c39f7844dfa78e7bcd9e0b4ecdabd95f70db7dbecc33da98bab181c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba44fa263c39f7844dfa78e7bcd9e0b4ecdabd95f70db7dbecc33da98bab181c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:57 compute-0 podman[91265]: 2026-01-31 07:58:57.046666297 +0000 UTC m=+0.020656605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:58:57 compute-0 podman[91265]: 2026-01-31 07:58:57.155491106 +0000 UTC m=+0.129481414 container init 920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf (image=quay.io/ceph/ceph:v20, name=nervous_sammet, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:57 compute-0 podman[91265]: 2026-01-31 07:58:57.162598344 +0000 UTC m=+0.136588632 container start 920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf (image=quay.io/ceph/ceph:v20, name=nervous_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:57 compute-0 podman[91265]: 2026-01-31 07:58:57.174059595 +0000 UTC m=+0.148049913 container attach 920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf (image=quay.io/ceph/ceph:v20, name=nervous_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:58:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 07:58:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684202794' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:58:57 compute-0 nervous_sammet[91280]: 
Jan 31 07:58:57 compute-0 nervous_sammet[91280]: {"fsid":"dc03f344-536f-5591-add9-31059f42637c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":149,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":18,"num_osds":3,"num_up_osds":3,"osd_up_since":1769846318,"num_in_osds":3,"osd_in_since":1769846291,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502964224,"bytes_avail":63908962304,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2026-01-31T07:56:24:478364+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-31T07:58:30.748381+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 31 07:58:57 compute-0 systemd[1]: libpod-920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf.scope: Deactivated successfully.
Jan 31 07:58:57 compute-0 podman[91265]: 2026-01-31 07:58:57.727543023 +0000 UTC m=+0.701533321 container died 920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf (image=quay.io/ceph/ceph:v20, name=nervous_sammet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 07:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba44fa263c39f7844dfa78e7bcd9e0b4ecdabd95f70db7dbecc33da98bab181c-merged.mount: Deactivated successfully.
Jan 31 07:58:57 compute-0 podman[91265]: 2026-01-31 07:58:57.769314395 +0000 UTC m=+0.743304683 container remove 920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf (image=quay.io/ceph/ceph:v20, name=nervous_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:58:57 compute-0 systemd[1]: libpod-conmon-920e5f1cbac809bb2eb7abf13a044daebe26f81afcb817e24c0f195a6c42ebcf.scope: Deactivated successfully.
Jan 31 07:58:57 compute-0 sudo[91261]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:57 compute-0 ceph-mon[75294]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:57 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1684202794' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:58:58 compute-0 sudo[91339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmotgdhykrtmezhslhaswgobzamzpli ; /usr/bin/python3'
Jan 31 07:58:58 compute-0 sudo[91339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:58 compute-0 python3[91341]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:58:58 compute-0 podman[91342]: 2026-01-31 07:58:58.177577779 +0000 UTC m=+0.020457559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:58:58 compute-0 podman[91342]: 2026-01-31 07:58:58.277221745 +0000 UTC m=+0.120101545 container create e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e (image=quay.io/ceph/ceph:v20, name=optimistic_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:58:58 compute-0 systemd[1]: Started libpod-conmon-e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e.scope.
Jan 31 07:58:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89261ed5ec29192123a69ccb5351bdbe15f0a58de7b7a67ba81317eb4973f77a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89261ed5ec29192123a69ccb5351bdbe15f0a58de7b7a67ba81317eb4973f77a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:58 compute-0 podman[91342]: 2026-01-31 07:58:58.371629701 +0000 UTC m=+0.214509511 container init e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e (image=quay.io/ceph/ceph:v20, name=optimistic_pare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:58 compute-0 podman[91342]: 2026-01-31 07:58:58.377373727 +0000 UTC m=+0.220253487 container start e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e (image=quay.io/ceph/ceph:v20, name=optimistic_pare, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:58 compute-0 podman[91342]: 2026-01-31 07:58:58.386206289 +0000 UTC m=+0.229086139 container attach e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e (image=quay.io/ceph/ceph:v20, name=optimistic_pare, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:58:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 07:58:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/503152164' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:58:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 07:58:58 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/503152164' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:58:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/503152164' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:58:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 31 07:58:58 compute-0 optimistic_pare[91358]: pool 'vms' created
Jan 31 07:58:58 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 31 07:58:58 compute-0 systemd[1]: libpod-e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e.scope: Deactivated successfully.
Jan 31 07:58:58 compute-0 podman[91342]: 2026-01-31 07:58:58.854806272 +0000 UTC m=+0.697686052 container died e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e (image=quay.io/ceph/ceph:v20, name=optimistic_pare, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-89261ed5ec29192123a69ccb5351bdbe15f0a58de7b7a67ba81317eb4973f77a-merged.mount: Deactivated successfully.
Jan 31 07:58:58 compute-0 podman[91342]: 2026-01-31 07:58:58.916250557 +0000 UTC m=+0.759130317 container remove e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e (image=quay.io/ceph/ceph:v20, name=optimistic_pare, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:58 compute-0 systemd[1]: libpod-conmon-e8e280fc90b001a2c0c6f1a6b84b732225023c3352bc663e48dce799178aad4e.scope: Deactivated successfully.
Jan 31 07:58:58 compute-0 sudo[91339]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:59 compute-0 sudo[91422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyyupwwwdlixbzjwqytozkxspyuldhnf ; /usr/bin/python3'
Jan 31 07:58:59 compute-0 sudo[91422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:59 compute-0 python3[91424]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:58:59 compute-0 podman[91425]: 2026-01-31 07:58:59.258997502 +0000 UTC m=+0.050836861 container create 73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8 (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 07:58:59 compute-0 systemd[1]: Started libpod-conmon-73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8.scope.
Jan 31 07:58:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ee81a3519dc36c42e13bf96853dc982cd73abc044e4a03b3a55dd4d7c95c48/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ee81a3519dc36c42e13bf96853dc982cd73abc044e4a03b3a55dd4d7c95c48/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:59 compute-0 podman[91425]: 2026-01-31 07:58:59.231459037 +0000 UTC m=+0.023298476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:58:59 compute-0 podman[91425]: 2026-01-31 07:58:59.335004583 +0000 UTC m=+0.126843942 container init 73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8 (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:59 compute-0 podman[91425]: 2026-01-31 07:58:59.340179502 +0000 UTC m=+0.132018861 container start 73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8 (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:58:59 compute-0 podman[91425]: 2026-01-31 07:58:59.347459235 +0000 UTC m=+0.139298594 container attach 73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8 (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:58:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:58:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 07:58:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1866346882' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:58:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 07:58:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1866346882' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:58:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 31 07:58:59 compute-0 quizzical_joliot[91440]: pool 'volumes' created
Jan 31 07:58:59 compute-0 systemd[1]: libpod-73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8.scope: Deactivated successfully.
Jan 31 07:58:59 compute-0 podman[91425]: 2026-01-31 07:58:59.999622271 +0000 UTC m=+0.791461620 container died 73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8 (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:59:00 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 31 07:59:00 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:59:00 compute-0 ceph-mon[75294]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:00 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/503152164' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:00 compute-0 ceph-mon[75294]: osdmap e19: 3 total, 3 up, 3 in
Jan 31 07:59:00 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1866346882' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ee81a3519dc36c42e13bf96853dc982cd73abc044e4a03b3a55dd4d7c95c48-merged.mount: Deactivated successfully.
Jan 31 07:59:00 compute-0 podman[91425]: 2026-01-31 07:59:00.632701821 +0000 UTC m=+1.424541180 container remove 73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8 (image=quay.io/ceph/ceph:v20, name=quizzical_joliot, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:00 compute-0 sudo[91422]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:00 compute-0 systemd[1]: libpod-conmon-73b66a38345ae6689a23d0a4c96639c575ae9ddb2e86fab8eb5efd7b4771ffd8.scope: Deactivated successfully.
Jan 31 07:59:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v76: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:00 compute-0 sudo[91503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsuaqcjhlnhlexmzrmsrxymldjukeqz ; /usr/bin/python3'
Jan 31 07:59:00 compute-0 sudo[91503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:00 compute-0 python3[91505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 07:59:01 compute-0 podman[91506]: 2026-01-31 07:59:00.996035057 +0000 UTC m=+0.022056598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 31 07:59:01 compute-0 podman[91506]: 2026-01-31 07:59:01.218874792 +0000 UTC m=+0.244896313 container create 41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a (image=quay.io/ceph/ceph:v20, name=busy_hermann, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 07:59:01 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 31 07:59:01 compute-0 systemd[1]: Started libpod-conmon-41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a.scope.
Jan 31 07:59:01 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1866346882' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:01 compute-0 ceph-mon[75294]: osdmap e20: 3 total, 3 up, 3 in
Jan 31 07:59:01 compute-0 ceph-mon[75294]: pgmap v76: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629abcc8749a4788098851010afc09cc8a7042cc64bb361b7e4ffab754d85d42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629abcc8749a4788098851010afc09cc8a7042cc64bb361b7e4ffab754d85d42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:01 compute-0 podman[91506]: 2026-01-31 07:59:01.421521388 +0000 UTC m=+0.447542989 container init 41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a (image=quay.io/ceph/ceph:v20, name=busy_hermann, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 07:59:01 compute-0 podman[91506]: 2026-01-31 07:59:01.429990748 +0000 UTC m=+0.456012269 container start 41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a (image=quay.io/ceph/ceph:v20, name=busy_hermann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:01 compute-0 podman[91506]: 2026-01-31 07:59:01.475642578 +0000 UTC m=+0.501664099 container attach 41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a (image=quay.io/ceph/ceph:v20, name=busy_hermann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 07:59:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 07:59:01 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3754888174' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 07:59:02 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3754888174' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 31 07:59:02 compute-0 busy_hermann[91521]: pool 'backups' created
Jan 31 07:59:02 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 31 07:59:02 compute-0 systemd[1]: libpod-41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a.scope: Deactivated successfully.
Jan 31 07:59:02 compute-0 podman[91506]: 2026-01-31 07:59:02.261737822 +0000 UTC m=+1.287759383 container died 41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a (image=quay.io/ceph/ceph:v20, name=busy_hermann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:02 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=20/22 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:59:02 compute-0 ceph-mon[75294]: osdmap e21: 3 total, 3 up, 3 in
Jan 31 07:59:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3754888174' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3754888174' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:02 compute-0 ceph-mon[75294]: osdmap e22: 3 total, 3 up, 3 in
Jan 31 07:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-629abcc8749a4788098851010afc09cc8a7042cc64bb361b7e4ffab754d85d42-merged.mount: Deactivated successfully.
Jan 31 07:59:02 compute-0 podman[91506]: 2026-01-31 07:59:02.457400064 +0000 UTC m=+1.483421585 container remove 41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a (image=quay.io/ceph/ceph:v20, name=busy_hermann, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:02 compute-0 systemd[1]: libpod-conmon-41d60863e011d63cca0417cd4ff9ea609d1433fe0a1e7a67eeeb7eac6b9ae70a.scope: Deactivated successfully.
Jan 31 07:59:02 compute-0 sudo[91503]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:02 compute-0 sudo[91583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxhlkjhbrrsrpryoyqitzrjnrzmvjij ; /usr/bin/python3'
Jan 31 07:59:02 compute-0 sudo[91583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:02 compute-0 python3[91585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v79: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:02 compute-0 podman[91586]: 2026-01-31 07:59:02.764190355 +0000 UTC m=+0.038378428 container create 5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b (image=quay.io/ceph/ceph:v20, name=strange_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 07:59:02 compute-0 systemd[1]: Started libpod-conmon-5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b.scope.
Jan 31 07:59:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b99a9c08d5548c999c8a9edcf98453863662f2eaf9f8262c207360a2386cf31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b99a9c08d5548c999c8a9edcf98453863662f2eaf9f8262c207360a2386cf31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:02 compute-0 podman[91586]: 2026-01-31 07:59:02.841434395 +0000 UTC m=+0.115622468 container init 5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b (image=quay.io/ceph/ceph:v20, name=strange_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:02 compute-0 podman[91586]: 2026-01-31 07:59:02.745033127 +0000 UTC m=+0.019221220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:02 compute-0 podman[91586]: 2026-01-31 07:59:02.846012855 +0000 UTC m=+0.120200928 container start 5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b (image=quay.io/ceph/ceph:v20, name=strange_margulis, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:02 compute-0 podman[91586]: 2026-01-31 07:59:02.853545646 +0000 UTC m=+0.127733739 container attach 5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b (image=quay.io/ceph/ceph:v20, name=strange_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 07:59:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 07:59:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 31 07:59:03 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 31 07:59:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 07:59:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3192348014' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:03 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:59:03 compute-0 ceph-mon[75294]: pgmap v79: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:03 compute-0 ceph-mon[75294]: osdmap e23: 3 total, 3 up, 3 in
Jan 31 07:59:03 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3192348014' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:03 compute-0 sshd-session[91624]: Invalid user solana from 193.32.162.145 port 57366
Jan 31 07:59:03 compute-0 sshd-session[91624]: Connection closed by invalid user solana 193.32.162.145 port 57366 [preauth]
Jan 31 07:59:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 07:59:04 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3192348014' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 31 07:59:04 compute-0 strange_margulis[91601]: pool 'images' created
Jan 31 07:59:04 compute-0 systemd[1]: libpod-5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b.scope: Deactivated successfully.
Jan 31 07:59:04 compute-0 podman[91586]: 2026-01-31 07:59:04.466599967 +0000 UTC m=+1.740788030 container died 5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b (image=quay.io/ceph/ceph:v20, name=strange_margulis, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:04 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 31 07:59:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b99a9c08d5548c999c8a9edcf98453863662f2eaf9f8262c207360a2386cf31-merged.mount: Deactivated successfully.
Jan 31 07:59:04 compute-0 podman[91586]: 2026-01-31 07:59:04.728046938 +0000 UTC m=+2.002235051 container remove 5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b (image=quay.io/ceph/ceph:v20, name=strange_margulis, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:59:04 compute-0 sudo[91583]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v82: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:04 compute-0 systemd[1]: libpod-conmon-5f5eb013dfba074e638afe0cc483c0585be03d6d7b59ee43f50f67ee99772d3b.scope: Deactivated successfully.
Jan 31 07:59:04 compute-0 sudo[91666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srslvcywxlidjpnhohzmgcpbmnncukcn ; /usr/bin/python3'
Jan 31 07:59:04 compute-0 sudo[91666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:05 compute-0 python3[91668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:05 compute-0 podman[91669]: 2026-01-31 07:59:05.061390453 +0000 UTC m=+0.043052632 container create 43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8 (image=quay.io/ceph/ceph:v20, name=cool_gagarin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 07:59:05 compute-0 systemd[1]: Started libpod-conmon-43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8.scope.
Jan 31 07:59:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d8d98fa8c1510b2cf3bc59cffd8048df2338215c1db271e2fb065d9e36d0f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d8d98fa8c1510b2cf3bc59cffd8048df2338215c1db271e2fb065d9e36d0f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:05 compute-0 podman[91669]: 2026-01-31 07:59:05.042834494 +0000 UTC m=+0.024496683 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:05 compute-0 podman[91669]: 2026-01-31 07:59:05.142101049 +0000 UTC m=+0.123763248 container init 43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8 (image=quay.io/ceph/ceph:v20, name=cool_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:05 compute-0 podman[91669]: 2026-01-31 07:59:05.147767583 +0000 UTC m=+0.129429762 container start 43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8 (image=quay.io/ceph/ceph:v20, name=cool_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:05 compute-0 podman[91669]: 2026-01-31 07:59:05.151311001 +0000 UTC m=+0.132973210 container attach 43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8 (image=quay.io/ceph/ceph:v20, name=cool_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 07:59:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 31 07:59:05 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3192348014' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:05 compute-0 ceph-mon[75294]: osdmap e24: 3 total, 3 up, 3 in
Jan 31 07:59:05 compute-0 ceph-mon[75294]: pgmap v82: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:05 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 31 07:59:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 07:59:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1406802696' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:59:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 07:59:06 compute-0 ceph-mon[75294]: osdmap e25: 3 total, 3 up, 3 in
Jan 31 07:59:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1406802696' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v84: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1406802696' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 31 07:59:06 compute-0 cool_gagarin[91686]: pool 'cephfs.cephfs.meta' created
Jan 31 07:59:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 31 07:59:06 compute-0 systemd[1]: libpod-43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8.scope: Deactivated successfully.
Jan 31 07:59:06 compute-0 conmon[91686]: conmon 43cb5545c195f59a27bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8.scope/container/memory.events
Jan 31 07:59:06 compute-0 podman[91669]: 2026-01-31 07:59:06.820862026 +0000 UTC m=+1.802524205 container died 43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8 (image=quay.io/ceph/ceph:v20, name=cool_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 07:59:06 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-47d8d98fa8c1510b2cf3bc59cffd8048df2338215c1db271e2fb065d9e36d0f8-merged.mount: Deactivated successfully.
Jan 31 07:59:06 compute-0 systemd[76681]: Starting Mark boot as successful...
Jan 31 07:59:06 compute-0 systemd[76681]: Finished Mark boot as successful.
Jan 31 07:59:07 compute-0 podman[91669]: 2026-01-31 07:59:07.118131034 +0000 UTC m=+2.099793213 container remove 43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8 (image=quay.io/ceph/ceph:v20, name=cool_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:07 compute-0 systemd[1]: libpod-conmon-43cb5545c195f59a27bc9d6f28820b63c8085a250fd9dcc8d0848673a1dacda8.scope: Deactivated successfully.
Jan 31 07:59:07 compute-0 sudo[91666]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:07 compute-0 sudo[91751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojvfjnbqctmntwxlvaqzesxfnbmokjax ; /usr/bin/python3'
Jan 31 07:59:07 compute-0 sudo[91751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:07 compute-0 python3[91753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:07 compute-0 podman[91754]: 2026-01-31 07:59:07.504050823 +0000 UTC m=+0.101693431 container create 6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2 (image=quay.io/ceph/ceph:v20, name=objective_saha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 07:59:07 compute-0 podman[91754]: 2026-01-31 07:59:07.430683132 +0000 UTC m=+0.028325750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:07 compute-0 systemd[1]: Started libpod-conmon-6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2.scope.
Jan 31 07:59:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4162aa4e20e711cd60ffbe1a4d6acb4dd3e75ad7c5809c3f9db930c5e9d576ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4162aa4e20e711cd60ffbe1a4d6acb4dd3e75ad7c5809c3f9db930c5e9d576ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:07 compute-0 podman[91754]: 2026-01-31 07:59:07.741772695 +0000 UTC m=+0.339415373 container init 6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2 (image=quay.io/ceph/ceph:v20, name=objective_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 07:59:07 compute-0 podman[91754]: 2026-01-31 07:59:07.747759009 +0000 UTC m=+0.345401627 container start 6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2 (image=quay.io/ceph/ceph:v20, name=objective_saha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 31 07:59:07 compute-0 podman[91754]: 2026-01-31 07:59:07.783169365 +0000 UTC m=+0.380811943 container attach 6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2 (image=quay.io/ceph/ceph:v20, name=objective_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 07:59:07 compute-0 ceph-mon[75294]: pgmap v84: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:07 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1406802696' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:07 compute-0 ceph-mon[75294]: osdmap e26: 3 total, 3 up, 3 in
Jan 31 07:59:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 31 07:59:08 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 31 07:59:08 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:59:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 07:59:08 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518899386' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v87: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 07:59:09 compute-0 ceph-mon[75294]: osdmap e27: 3 total, 3 up, 3 in
Jan 31 07:59:09 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3518899386' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 07:59:09 compute-0 ceph-mon[75294]: pgmap v87: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:09 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3518899386' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 31 07:59:09 compute-0 objective_saha[91769]: pool 'cephfs.cephfs.data' created
Jan 31 07:59:09 compute-0 systemd[1]: libpod-6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2.scope: Deactivated successfully.
Jan 31 07:59:09 compute-0 podman[91754]: 2026-01-31 07:59:09.219102903 +0000 UTC m=+1.816745481 container died 6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2 (image=quay.io/ceph/ceph:v20, name=objective_saha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 07:59:09 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 31 07:59:09 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4162aa4e20e711cd60ffbe1a4d6acb4dd3e75ad7c5809c3f9db930c5e9d576ff-merged.mount: Deactivated successfully.
Jan 31 07:59:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:09 compute-0 podman[91754]: 2026-01-31 07:59:09.461606402 +0000 UTC m=+2.059248990 container remove 6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2 (image=quay.io/ceph/ceph:v20, name=objective_saha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:09 compute-0 systemd[1]: libpod-conmon-6612e26d1caf66bfc67a20f6f7a2513e017b7716f55bb9fbf48935cbf4f089f2.scope: Deactivated successfully.
Jan 31 07:59:09 compute-0 sudo[91751]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:09 compute-0 sudo[91833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhbujaovuahgbbvgyvhfjforjelkjkfh ; /usr/bin/python3'
Jan 31 07:59:09 compute-0 sudo[91833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:09 compute-0 python3[91835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:09 compute-0 podman[91836]: 2026-01-31 07:59:09.875027824 +0000 UTC m=+0.096840592 container create 79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc (image=quay.io/ceph/ceph:v20, name=brave_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:09 compute-0 podman[91836]: 2026-01-31 07:59:09.795973409 +0000 UTC m=+0.017786197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:09 compute-0 systemd[1]: Started libpod-conmon-79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc.scope.
Jan 31 07:59:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009baa8fd19dc606e948031e37ab47aea4f9b7cf0c0834232b4f510ae55c3c14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009baa8fd19dc606e948031e37ab47aea4f9b7cf0c0834232b4f510ae55c3c14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:10 compute-0 podman[91836]: 2026-01-31 07:59:10.08355488 +0000 UTC m=+0.305367658 container init 79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc (image=quay.io/ceph/ceph:v20, name=brave_khayyam, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 07:59:10 compute-0 podman[91836]: 2026-01-31 07:59:10.087306936 +0000 UTC m=+0.309119724 container start 79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc (image=quay.io/ceph/ceph:v20, name=brave_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:59:10 compute-0 podman[91836]: 2026-01-31 07:59:10.141815008 +0000 UTC m=+0.363627816 container attach 79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc (image=quay.io/ceph/ceph:v20, name=brave_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 07:59:10 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3518899386' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:59:10 compute-0 ceph-mon[75294]: osdmap e28: 3 total, 3 up, 3 in
Jan 31 07:59:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 31 07:59:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3416390795' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 07:59:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 31 07:59:10 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 31 07:59:10 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:59:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v90: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:11 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3416390795' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 07:59:11 compute-0 ceph-mon[75294]: osdmap e29: 3 total, 3 up, 3 in
Jan 31 07:59:11 compute-0 ceph-mon[75294]: pgmap v90: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 07:59:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3416390795' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 07:59:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 31 07:59:11 compute-0 brave_khayyam[91852]: enabled application 'rbd' on pool 'vms'
Jan 31 07:59:11 compute-0 systemd[1]: libpod-79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc.scope: Deactivated successfully.
Jan 31 07:59:11 compute-0 podman[91836]: 2026-01-31 07:59:11.716992747 +0000 UTC m=+1.938805515 container died 79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc (image=quay.io/ceph/ceph:v20, name=brave_khayyam, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 07:59:11 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 31 07:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-009baa8fd19dc606e948031e37ab47aea4f9b7cf0c0834232b4f510ae55c3c14-merged.mount: Deactivated successfully.
Jan 31 07:59:12 compute-0 podman[91836]: 2026-01-31 07:59:12.302576619 +0000 UTC m=+2.524389387 container remove 79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc (image=quay.io/ceph/ceph:v20, name=brave_khayyam, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:12 compute-0 systemd[1]: libpod-conmon-79080c45fec9ff0f5e6b08876a66cdcb2e26fbdd3ca52c334c208c831228f0bc.scope: Deactivated successfully.
Jan 31 07:59:12 compute-0 sudo[91833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:12 compute-0 sudo[91912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kawyolphnhvfleaagyeumrzbfqvuqmpi ; /usr/bin/python3'
Jan 31 07:59:12 compute-0 sudo[91912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:12 compute-0 python3[91914]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:12 compute-0 podman[91915]: 2026-01-31 07:59:12.609287478 +0000 UTC m=+0.020519901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:12 compute-0 podman[91915]: 2026-01-31 07:59:12.754871664 +0000 UTC m=+0.166104047 container create 563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d (image=quay.io/ceph/ceph:v20, name=sad_mestorf, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:59:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v92: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:12 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3416390795' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 07:59:12 compute-0 ceph-mon[75294]: osdmap e30: 3 total, 3 up, 3 in
Jan 31 07:59:12 compute-0 systemd[1]: Started libpod-conmon-563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d.scope.
Jan 31 07:59:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874a49d265ca3a2193fed2dc824479427fba149627659d8dfeb1ef1ef62e47ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874a49d265ca3a2193fed2dc824479427fba149627659d8dfeb1ef1ef62e47ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:13 compute-0 podman[91915]: 2026-01-31 07:59:13.058417935 +0000 UTC m=+0.469650348 container init 563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d (image=quay.io/ceph/ceph:v20, name=sad_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:13 compute-0 podman[91915]: 2026-01-31 07:59:13.063215602 +0000 UTC m=+0.474447985 container start 563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d (image=quay.io/ceph/ceph:v20, name=sad_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:13 compute-0 podman[91915]: 2026-01-31 07:59:13.177047774 +0000 UTC m=+0.588280157 container attach 563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d (image=quay.io/ceph/ceph:v20, name=sad_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 31 07:59:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3826509323' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 07:59:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 07:59:14 compute-0 ceph-mon[75294]: pgmap v92: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:14 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3826509323' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 07:59:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3826509323' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 07:59:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 31 07:59:14 compute-0 sad_mestorf[91930]: enabled application 'rbd' on pool 'volumes'
Jan 31 07:59:14 compute-0 systemd[1]: libpod-563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d.scope: Deactivated successfully.
Jan 31 07:59:14 compute-0 podman[91915]: 2026-01-31 07:59:14.417321951 +0000 UTC m=+1.828554334 container died 563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d (image=quay.io/ceph/ceph:v20, name=sad_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 07:59:14 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 31 07:59:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v94: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-874a49d265ca3a2193fed2dc824479427fba149627659d8dfeb1ef1ef62e47ff-merged.mount: Deactivated successfully.
Jan 31 07:59:15 compute-0 podman[91915]: 2026-01-31 07:59:15.101792528 +0000 UTC m=+2.513024911 container remove 563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d (image=quay.io/ceph/ceph:v20, name=sad_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:15 compute-0 sudo[91912]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:15 compute-0 systemd[1]: libpod-conmon-563f80e0214653a7035e736e8119ea28c75ccfc4cd7d31dfbf10736e9f07545d.scope: Deactivated successfully.
Jan 31 07:59:15 compute-0 sudo[91991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpswsckqfrybbsqixeyhcexragqrqusq ; /usr/bin/python3'
Jan 31 07:59:15 compute-0 sudo[91991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:15 compute-0 python3[91993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:15 compute-0 podman[91994]: 2026-01-31 07:59:15.506825782 +0000 UTC m=+0.021310595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:15 compute-0 podman[91994]: 2026-01-31 07:59:15.773218014 +0000 UTC m=+0.287702807 container create 97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7 (image=quay.io/ceph/ceph:v20, name=adoring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:15 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3826509323' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 07:59:15 compute-0 ceph-mon[75294]: osdmap e31: 3 total, 3 up, 3 in
Jan 31 07:59:15 compute-0 ceph-mon[75294]: pgmap v94: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:16 compute-0 systemd[1]: Started libpod-conmon-97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7.scope.
Jan 31 07:59:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567eee3512053205631daa6d6ca80090711900078c471d726186696ef4579024/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567eee3512053205631daa6d6ca80090711900078c471d726186696ef4579024/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:16 compute-0 podman[91994]: 2026-01-31 07:59:16.165847958 +0000 UTC m=+0.680332761 container init 97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7 (image=quay.io/ceph/ceph:v20, name=adoring_williamson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 07:59:16 compute-0 podman[91994]: 2026-01-31 07:59:16.170293594 +0000 UTC m=+0.684778387 container start 97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7 (image=quay.io/ceph/ceph:v20, name=adoring_williamson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 07:59:16 compute-0 podman[91994]: 2026-01-31 07:59:16.328373684 +0000 UTC m=+0.842858477 container attach 97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7 (image=quay.io/ceph/ceph:v20, name=adoring_williamson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 31 07:59:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4090985318' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 07:59:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v95: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 07:59:17 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4090985318' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 07:59:17 compute-0 ceph-mon[75294]: pgmap v95: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:17 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4090985318' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 07:59:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 07:59:17 compute-0 adoring_williamson[92009]: enabled application 'rbd' on pool 'backups'
Jan 31 07:59:17 compute-0 systemd[1]: libpod-97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7.scope: Deactivated successfully.
Jan 31 07:59:17 compute-0 podman[91994]: 2026-01-31 07:59:17.362466865 +0000 UTC m=+1.876951678 container died 97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7 (image=quay.io/ceph/ceph:v20, name=adoring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:17 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 31 07:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-567eee3512053205631daa6d6ca80090711900078c471d726186696ef4579024-merged.mount: Deactivated successfully.
Jan 31 07:59:18 compute-0 podman[91994]: 2026-01-31 07:59:18.092400095 +0000 UTC m=+2.606884888 container remove 97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7 (image=quay.io/ceph/ceph:v20, name=adoring_williamson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 07:59:18 compute-0 sudo[91991]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:18 compute-0 systemd[1]: libpod-conmon-97f2e261b73234e05ec1f2f8e72650d3deb9ed74aa62c74e5a58b63c31c038d7.scope: Deactivated successfully.
Jan 31 07:59:18 compute-0 sudo[92069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azfkypglbmptufhkhoycidmorrdadmpw ; /usr/bin/python3'
Jan 31 07:59:18 compute-0 sudo[92069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:18 compute-0 python3[92071]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:18 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4090985318' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 07:59:18 compute-0 ceph-mon[75294]: osdmap e32: 3 total, 3 up, 3 in
Jan 31 07:59:18 compute-0 podman[92072]: 2026-01-31 07:59:18.398775953 +0000 UTC m=+0.022501441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:18 compute-0 podman[92072]: 2026-01-31 07:59:18.583228902 +0000 UTC m=+0.206954400 container create 7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e (image=quay.io/ceph/ceph:v20, name=serene_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:18 compute-0 systemd[1]: Started libpod-conmon-7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e.scope.
Jan 31 07:59:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db5b0b906715211d70d9611876f38cadf45316353bd186c9a13ea55315bdf18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db5b0b906715211d70d9611876f38cadf45316353bd186c9a13ea55315bdf18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v97: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:18 compute-0 podman[92072]: 2026-01-31 07:59:18.822068858 +0000 UTC m=+0.445794316 container init 7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e (image=quay.io/ceph/ceph:v20, name=serene_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:59:18 compute-0 podman[92072]: 2026-01-31 07:59:18.82803389 +0000 UTC m=+0.451759348 container start 7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e (image=quay.io/ceph/ceph:v20, name=serene_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:59:18 compute-0 podman[92072]: 2026-01-31 07:59:18.864305113 +0000 UTC m=+0.488030591 container attach 7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e (image=quay.io/ceph/ceph:v20, name=serene_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 31 07:59:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/838430307' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 07:59:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 07:59:19 compute-0 ceph-mon[75294]: pgmap v97: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/838430307' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 07:59:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/838430307' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 07:59:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 07:59:20 compute-0 serene_margulis[92087]: enabled application 'rbd' on pool 'images'
Jan 31 07:59:20 compute-0 systemd[1]: libpod-7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e.scope: Deactivated successfully.
Jan 31 07:59:20 compute-0 podman[92072]: 2026-01-31 07:59:20.116718362 +0000 UTC m=+1.740443820 container died 7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e (image=quay.io/ceph/ceph:v20, name=serene_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 07:59:20 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 31 07:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7db5b0b906715211d70d9611876f38cadf45316353bd186c9a13ea55315bdf18-merged.mount: Deactivated successfully.
Jan 31 07:59:20 compute-0 podman[92072]: 2026-01-31 07:59:20.608868989 +0000 UTC m=+2.232594457 container remove 7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e (image=quay.io/ceph/ceph:v20, name=serene_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:20 compute-0 sudo[92069]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:20 compute-0 systemd[1]: libpod-conmon-7672ce93a3603af739080599c4303bf4d3eaed1e228bc9c706a0e39f6a36640e.scope: Deactivated successfully.
Jan 31 07:59:20 compute-0 sudo[92148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypmzscsxrdogwughzkoshpjkytfpznbg ; /usr/bin/python3'
Jan 31 07:59:20 compute-0 sudo[92148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v99: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:20 compute-0 python3[92150]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:21 compute-0 podman[92151]: 2026-01-31 07:59:21.013304235 +0000 UTC m=+0.118920599 container create be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59 (image=quay.io/ceph/ceph:v20, name=compassionate_rhodes, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:59:21 compute-0 podman[92151]: 2026-01-31 07:59:20.919282211 +0000 UTC m=+0.024898625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:21 compute-0 systemd[1]: Started libpod-conmon-be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59.scope.
Jan 31 07:59:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c2baa2c1c28781d6900d5bdf333529e62d06177ea728178ec2de405b73ae40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c2baa2c1c28781d6900d5bdf333529e62d06177ea728178ec2de405b73ae40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:21 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/838430307' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 07:59:21 compute-0 ceph-mon[75294]: osdmap e33: 3 total, 3 up, 3 in
Jan 31 07:59:21 compute-0 ceph-mon[75294]: pgmap v99: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:21 compute-0 podman[92151]: 2026-01-31 07:59:21.326964907 +0000 UTC m=+0.432581371 container init be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59 (image=quay.io/ceph/ceph:v20, name=compassionate_rhodes, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:21 compute-0 podman[92151]: 2026-01-31 07:59:21.334769916 +0000 UTC m=+0.440386320 container start be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59 (image=quay.io/ceph/ceph:v20, name=compassionate_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:59:21 compute-0 podman[92151]: 2026-01-31 07:59:21.477447893 +0000 UTC m=+0.583064357 container attach be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59 (image=quay.io/ceph/ceph:v20, name=compassionate_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 07:59:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 31 07:59:21 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1774156913' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 07:59:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 07:59:22 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1774156913' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 07:59:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1774156913' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 07:59:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 07:59:22 compute-0 compassionate_rhodes[92166]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 07:59:22 compute-0 systemd[1]: libpod-be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59.scope: Deactivated successfully.
Jan 31 07:59:22 compute-0 podman[92151]: 2026-01-31 07:59:22.835263335 +0000 UTC m=+1.940879729 container died be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59 (image=quay.io/ceph/ceph:v20, name=compassionate_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 07:59:22 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 31 07:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-77c2baa2c1c28781d6900d5bdf333529e62d06177ea728178ec2de405b73ae40-merged.mount: Deactivated successfully.
Jan 31 07:59:23 compute-0 podman[92151]: 2026-01-31 07:59:23.217936777 +0000 UTC m=+2.323553141 container remove be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59 (image=quay.io/ceph/ceph:v20, name=compassionate_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:23 compute-0 sudo[92148]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:23 compute-0 systemd[1]: libpod-conmon-be36b8e42a96f70eede9f7cb7019a992ae3f2b8160a17398c5daa3ae0f2e9c59.scope: Deactivated successfully.
Jan 31 07:59:23 compute-0 sudo[92227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmaffovaprgvtgufjjgfomakdxsxtunv ; /usr/bin/python3'
Jan 31 07:59:23 compute-0 sudo[92227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:23 compute-0 python3[92229]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:23 compute-0 podman[92230]: 2026-01-31 07:59:23.581521313 +0000 UTC m=+0.072696163 container create 9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7 (image=quay.io/ceph/ceph:v20, name=compassionate_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:23 compute-0 systemd[1]: Started libpod-conmon-9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7.scope.
Jan 31 07:59:23 compute-0 podman[92230]: 2026-01-31 07:59:23.533599422 +0000 UTC m=+0.024774292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c112592fc524d78aa9083175795c368002ec0dd6674d4e0c189d472dc6eeece8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c112592fc524d78aa9083175795c368002ec0dd6674d4e0c189d472dc6eeece8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:23 compute-0 podman[92230]: 2026-01-31 07:59:23.701050747 +0000 UTC m=+0.192225607 container init 9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7 (image=quay.io/ceph/ceph:v20, name=compassionate_easley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:23 compute-0 podman[92230]: 2026-01-31 07:59:23.70520456 +0000 UTC m=+0.196379400 container start 9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7 (image=quay.io/ceph/ceph:v20, name=compassionate_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:59:23 compute-0 podman[92230]: 2026-01-31 07:59:23.719737134 +0000 UTC m=+0.210912064 container attach 9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7 (image=quay.io/ceph/ceph:v20, name=compassionate_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:59:24 compute-0 ceph-mon[75294]: pgmap v100: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:24 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1774156913' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 07:59:24 compute-0 ceph-mon[75294]: osdmap e34: 3 total, 3 up, 3 in
Jan 31 07:59:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 31 07:59:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/167799388' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 07:59:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v102: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 07:59:25 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/167799388' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 07:59:25 compute-0 ceph-mon[75294]: pgmap v102: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/167799388' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 07:59:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 07:59:25 compute-0 compassionate_easley[92246]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 07:59:25 compute-0 systemd[1]: libpod-9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7.scope: Deactivated successfully.
Jan 31 07:59:25 compute-0 podman[92230]: 2026-01-31 07:59:25.677427949 +0000 UTC m=+2.168602789 container died 9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7 (image=quay.io/ceph/ceph:v20, name=compassionate_easley, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 07:59:25 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 31 07:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c112592fc524d78aa9083175795c368002ec0dd6674d4e0c189d472dc6eeece8-merged.mount: Deactivated successfully.
Jan 31 07:59:26 compute-0 podman[92230]: 2026-01-31 07:59:26.255848425 +0000 UTC m=+2.747023265 container remove 9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7 (image=quay.io/ceph/ceph:v20, name=compassionate_easley, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:26 compute-0 sudo[92227]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:26 compute-0 systemd[1]: libpod-conmon-9d8c272cd3f082c879a6fdaae758436543114428e29ba0904254c339d6a4a2d7.scope: Deactivated successfully.
Jan 31 07:59:26 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/167799388' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 07:59:26 compute-0 ceph-mon[75294]: osdmap e35: 3 total, 3 up, 3 in
Jan 31 07:59:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v104: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:27 compute-0 python3[92359]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:59:27 compute-0 python3[92430]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846366.8371806-36904-22812507542193/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:59:27 compute-0 sudo[92530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogvbzaqevfyynogcztlbzyvhquitqrb ; /usr/bin/python3'
Jan 31 07:59:27 compute-0 sudo[92530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:27 compute-0 ceph-mon[75294]: pgmap v104: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:27 compute-0 python3[92532]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:59:27 compute-0 sudo[92530]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:28 compute-0 sudo[92605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpyjkohqnlgeimtciwfvnmzahtfgfxrq ; /usr/bin/python3'
Jan 31 07:59:28 compute-0 sudo[92605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:28 compute-0 python3[92607]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846367.6501198-36918-203108570199258/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=2a68cf8cf81ae7f51f9e7a51ba2130e932f03026 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:59:28 compute-0 sudo[92605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:28 compute-0 sudo[92655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kudjuxilgvxxjwdsgeonqsphophlcozn ; /usr/bin/python3'
Jan 31 07:59:28 compute-0 sudo[92655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:28 compute-0 python3[92657]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v105: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:28 compute-0 podman[92658]: 2026-01-31 07:59:28.715892503 +0000 UTC m=+0.021996987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:28 compute-0 podman[92658]: 2026-01-31 07:59:28.82189866 +0000 UTC m=+0.128003074 container create 5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b (image=quay.io/ceph/ceph:v20, name=gallant_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 07:59:28 compute-0 systemd[1]: Started libpod-conmon-5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b.scope.
Jan 31 07:59:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a747561284d124504924b09cdc1946b5c634ac51ce3279aac594eb29dd021/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a747561284d124504924b09cdc1946b5c634ac51ce3279aac594eb29dd021/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a747561284d124504924b09cdc1946b5c634ac51ce3279aac594eb29dd021/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:28 compute-0 podman[92658]: 2026-01-31 07:59:28.948162786 +0000 UTC m=+0.254267200 container init 5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b (image=quay.io/ceph/ceph:v20, name=gallant_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:28 compute-0 podman[92658]: 2026-01-31 07:59:28.953147542 +0000 UTC m=+0.259251936 container start 5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b (image=quay.io/ceph/ceph:v20, name=gallant_carson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:59:29 compute-0 podman[92658]: 2026-01-31 07:59:29.113262787 +0000 UTC m=+0.419367201 container attach 5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b (image=quay.io/ceph/ceph:v20, name=gallant_carson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 07:59:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/590085785' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 07:59:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/590085785' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:59:29 compute-0 gallant_carson[92673]: 
Jan 31 07:59:29 compute-0 gallant_carson[92673]: [global]
Jan 31 07:59:29 compute-0 gallant_carson[92673]:         fsid = dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:59:29 compute-0 gallant_carson[92673]:         mon_host = 192.168.122.100
Jan 31 07:59:29 compute-0 gallant_carson[92673]:         rgw_keystone_api_version = 3
Jan 31 07:59:29 compute-0 systemd[1]: libpod-5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b.scope: Deactivated successfully.
Jan 31 07:59:29 compute-0 podman[92658]: 2026-01-31 07:59:29.458489446 +0000 UTC m=+0.764593860 container died 5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b (image=quay.io/ceph/ceph:v20, name=gallant_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 07:59:29 compute-0 sudo[92698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:29 compute-0 sudo[92698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:29 compute-0 sudo[92698]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:29 compute-0 sudo[92734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 07:59:29 compute-0 sudo[92734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d86a747561284d124504924b09cdc1946b5c634ac51ce3279aac594eb29dd021-merged.mount: Deactivated successfully.
Jan 31 07:59:29 compute-0 podman[92658]: 2026-01-31 07:59:29.807352232 +0000 UTC m=+1.113456626 container remove 5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b (image=quay.io/ceph/ceph:v20, name=gallant_carson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:29 compute-0 systemd[1]: libpod-conmon-5a6ba633cd54777f7e5701e96ef9fe0b1e9aee0bea5dfd1f95ece715545e015b.scope: Deactivated successfully.
Jan 31 07:59:29 compute-0 sudo[92655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:29 compute-0 sudo[92842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuiizkbbcltxggprpebnsjidbywxxlnr ; /usr/bin/python3'
Jan 31 07:59:29 compute-0 sudo[92842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:29 compute-0 ceph-mon[75294]: pgmap v105: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:29 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/590085785' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 07:59:29 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/590085785' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:59:30 compute-0 python3[92844]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:30 compute-0 podman[92805]: 2026-01-31 07:59:30.201772546 +0000 UTC m=+0.314944638 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:30 compute-0 podman[92845]: 2026-01-31 07:59:30.191687693 +0000 UTC m=+0.102090242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:30 compute-0 podman[92845]: 2026-01-31 07:59:30.43592649 +0000 UTC m=+0.346329019 container create 2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d (image=quay.io/ceph/ceph:v20, name=heuristic_cannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:30 compute-0 systemd[1]: Started libpod-conmon-2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d.scope.
Jan 31 07:59:30 compute-0 podman[92805]: 2026-01-31 07:59:30.717881141 +0000 UTC m=+0.831053233 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639720b780d53680747fd90ea5ef6f9a9ab591dbd007a929bb3ea2ee27d5b283/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639720b780d53680747fd90ea5ef6f9a9ab591dbd007a929bb3ea2ee27d5b283/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639720b780d53680747fd90ea5ef6f9a9ab591dbd007a929bb3ea2ee27d5b283/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v106: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:31 compute-0 ceph-mon[75294]: pgmap v106: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:32 compute-0 podman[92845]: 2026-01-31 07:59:32.118052537 +0000 UTC m=+2.028455086 container init 2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d (image=quay.io/ceph/ceph:v20, name=heuristic_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:32 compute-0 podman[92845]: 2026-01-31 07:59:32.122334333 +0000 UTC m=+2.032736862 container start 2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d (image=quay.io/ceph/ceph:v20, name=heuristic_cannon, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:59:32 compute-0 podman[92845]: 2026-01-31 07:59:32.361247837 +0000 UTC m=+2.271650456 container attach 2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d (image=quay.io/ceph/ceph:v20, name=heuristic_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 31 07:59:32 compute-0 sudo[92734]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:59:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/732270079' entity='client.admin' 
Jan 31 07:59:32 compute-0 heuristic_cannon[92878]: set ssl_option
Jan 31 07:59:32 compute-0 systemd[1]: libpod-2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d.scope: Deactivated successfully.
Jan 31 07:59:32 compute-0 podman[92845]: 2026-01-31 07:59:32.943080216 +0000 UTC m=+2.853482755 container died 2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d (image=quay.io/ceph/ceph:v20, name=heuristic_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:59:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:59:33 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 07:59:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:59:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 07:59:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-639720b780d53680747fd90ea5ef6f9a9ab591dbd007a929bb3ea2ee27d5b283-merged.mount: Deactivated successfully.
Jan 31 07:59:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 07:59:33 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:59:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 07:59:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:59:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:59:33 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:33 compute-0 sudo[93030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:33 compute-0 sudo[93030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:33 compute-0 sudo[93030]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:33 compute-0 sudo[93055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 07:59:33 compute-0 sudo[93055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:33 compute-0 podman[92845]: 2026-01-31 07:59:33.747831554 +0000 UTC m=+3.658234103 container remove 2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d (image=quay.io/ceph/ceph:v20, name=heuristic_cannon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:33 compute-0 sudo[92842]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:33 compute-0 systemd[1]: libpod-conmon-2cb4375616d19cd67cf17d372ccca1e4852347d91888234118b3f98d0c87c34d.scope: Deactivated successfully.
Jan 31 07:59:33 compute-0 sudo[93129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfykvcdmbckaxbivxjpxdcpskhthfdej ; /usr/bin/python3'
Jan 31 07:59:33 compute-0 sudo[93129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:33 compute-0 podman[93092]: 2026-01-31 07:59:33.795858078 +0000 UTC m=+0.018321738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:33 compute-0 podman[93092]: 2026-01-31 07:59:33.957831993 +0000 UTC m=+0.180295573 container create 574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_edison, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:34 compute-0 python3[93131]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:34 compute-0 ceph-mon[75294]: pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/732270079' entity='client.admin' 
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:59:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:34 compute-0 systemd[1]: Started libpod-conmon-574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02.scope.
Jan 31 07:59:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:34 compute-0 podman[93132]: 2026-01-31 07:59:34.128916576 +0000 UTC m=+0.038037444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:34 compute-0 podman[93132]: 2026-01-31 07:59:34.336557501 +0000 UTC m=+0.245678319 container create b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54 (image=quay.io/ceph/ceph:v20, name=adoring_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 07:59:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:34 compute-0 systemd[1]: Started libpod-conmon-b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54.scope.
Jan 31 07:59:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78589332ec18e329ed01024276b47832c6e06ebfd64235c4b35bb1fa7e7c12fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78589332ec18e329ed01024276b47832c6e06ebfd64235c4b35bb1fa7e7c12fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78589332ec18e329ed01024276b47832c6e06ebfd64235c4b35bb1fa7e7c12fa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v108: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:35 compute-0 podman[93092]: 2026-01-31 07:59:35.05323799 +0000 UTC m=+1.275701660 container init 574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_edison, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:59:35 compute-0 podman[93092]: 2026-01-31 07:59:35.060116696 +0000 UTC m=+1.282580306 container start 574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_edison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 07:59:35 compute-0 wonderful_edison[93146]: 167 167
Jan 31 07:59:35 compute-0 systemd[1]: libpod-574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02.scope: Deactivated successfully.
Jan 31 07:59:35 compute-0 podman[93092]: 2026-01-31 07:59:35.860012903 +0000 UTC m=+2.082476523 container attach 574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_edison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:35 compute-0 podman[93092]: 2026-01-31 07:59:35.860585868 +0000 UTC m=+2.083049488 container died 574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_edison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:35 compute-0 ceph-mon[75294]: pgmap v108: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-acb9a8427e63159a8959809901cd3ab24d50bcc3f5c698201f6b6c00fa0ad435-merged.mount: Deactivated successfully.
Jan 31 07:59:36 compute-0 podman[93092]: 2026-01-31 07:59:36.311276149 +0000 UTC m=+2.533739769 container remove 574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:59:36 compute-0 podman[93132]: 2026-01-31 07:59:36.458688259 +0000 UTC m=+2.367809097 container init b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54 (image=quay.io/ceph/ceph:v20, name=adoring_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:36 compute-0 podman[93132]: 2026-01-31 07:59:36.464546349 +0000 UTC m=+2.373667167 container start b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54 (image=quay.io/ceph/ceph:v20, name=adoring_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 31 07:59:36 compute-0 systemd[1]: libpod-conmon-574ef65bd6afb4d6e4f0ab5a794bbb5ddfbafd9a448cec941f0c291f6ab78a02.scope: Deactivated successfully.
Jan 31 07:59:36 compute-0 podman[93132]: 2026-01-31 07:59:36.547441998 +0000 UTC m=+2.456562836 container attach b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54 (image=quay.io/ceph/ceph:v20, name=adoring_elbakyan, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:36 compute-0 podman[93177]: 2026-01-31 07:59:36.581578554 +0000 UTC m=+0.184591920 container create d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dubinsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:36 compute-0 podman[93177]: 2026-01-31 07:59:36.555037894 +0000 UTC m=+0.158051240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:36 compute-0 systemd[1]: Started libpod-conmon-d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3.scope.
Jan 31 07:59:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b17b2f7e040310bbd3694ebffde40b96dcccae995099b0a05791bef689cae8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b17b2f7e040310bbd3694ebffde40b96dcccae995099b0a05791bef689cae8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b17b2f7e040310bbd3694ebffde40b96dcccae995099b0a05791bef689cae8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b17b2f7e040310bbd3694ebffde40b96dcccae995099b0a05791bef689cae8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b17b2f7e040310bbd3694ebffde40b96dcccae995099b0a05791bef689cae8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:36 compute-0 podman[93177]: 2026-01-31 07:59:36.798606454 +0000 UTC m=+0.401619790 container init d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:59:36 compute-0 podman[93177]: 2026-01-31 07:59:36.807559047 +0000 UTC m=+0.410572373 container start d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 07:59:36 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:59:36 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 31 07:59:36 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 07:59:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 07:59:36 compute-0 podman[93177]: 2026-01-31 07:59:36.99670168 +0000 UTC m=+0.599715006 container attach d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dubinsky, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:59:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:37 compute-0 adoring_elbakyan[93152]: Scheduled rgw.rgw update...
Jan 31 07:59:37 compute-0 systemd[1]: libpod-b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54.scope: Deactivated successfully.
Jan 31 07:59:37 compute-0 podman[93132]: 2026-01-31 07:59:37.024164165 +0000 UTC m=+2.933284983 container died b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54 (image=quay.io/ceph/ceph:v20, name=adoring_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:37 compute-0 busy_dubinsky[93213]: --> passed data devices: 0 physical, 3 LVM
Jan 31 07:59:37 compute-0 busy_dubinsky[93213]: --> All data devices are unavailable
Jan 31 07:59:37 compute-0 systemd[1]: libpod-d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3.scope: Deactivated successfully.
Jan 31 07:59:37 compute-0 ceph-mon[75294]: pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:37 compute-0 ceph-mon[75294]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:59:37 compute-0 ceph-mon[75294]: Saving service rgw.rgw spec with placement compute-0
Jan 31 07:59:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-78589332ec18e329ed01024276b47832c6e06ebfd64235c4b35bb1fa7e7c12fa-merged.mount: Deactivated successfully.
Jan 31 07:59:37 compute-0 podman[93132]: 2026-01-31 07:59:37.584923423 +0000 UTC m=+3.494044281 container remove b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54 (image=quay.io/ceph/ceph:v20, name=adoring_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:59:37 compute-0 systemd[1]: libpod-conmon-b6bca0fb5a40c156d79cfa1e7ed4f41655744171b41a0f2b22556f369a886b54.scope: Deactivated successfully.
Jan 31 07:59:37 compute-0 sudo[93129]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:37 compute-0 podman[93177]: 2026-01-31 07:59:37.625230657 +0000 UTC m=+1.228243983 container died d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-46b17b2f7e040310bbd3694ebffde40b96dcccae995099b0a05791bef689cae8-merged.mount: Deactivated successfully.
Jan 31 07:59:37 compute-0 podman[93177]: 2026-01-31 07:59:37.978547924 +0000 UTC m=+1.581561250 container remove d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 07:59:37 compute-0 systemd[1]: libpod-conmon-d4956b740b2849b152f7b96ecd488760aafbd7eefce1412fe1db0855071d6ca3.scope: Deactivated successfully.
Jan 31 07:59:38 compute-0 sudo[93055]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:38 compute-0 sudo[93259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:38 compute-0 sudo[93259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:38 compute-0 sudo[93259]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:38 compute-0 sudo[93284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 07:59:38 compute-0 sudo[93284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:38 compute-0 podman[93332]: 2026-01-31 07:59:38.362583835 +0000 UTC m=+0.063609756 container create b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_burnell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:38 compute-0 podman[93332]: 2026-01-31 07:59:38.317885332 +0000 UTC m=+0.018911273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:38 compute-0 systemd[1]: Started libpod-conmon-b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66.scope.
Jan 31 07:59:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:38 compute-0 podman[93332]: 2026-01-31 07:59:38.574503877 +0000 UTC m=+0.275529818 container init b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_burnell, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:59:38 compute-0 podman[93332]: 2026-01-31 07:59:38.582952206 +0000 UTC m=+0.283978117 container start b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_burnell, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:38 compute-0 romantic_burnell[93413]: 167 167
Jan 31 07:59:38 compute-0 systemd[1]: libpod-b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66.scope: Deactivated successfully.
Jan 31 07:59:38 compute-0 conmon[93413]: conmon b5b521c8e1d9eff48b90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66.scope/container/memory.events
Jan 31 07:59:38 compute-0 python3[93412]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:59:38 compute-0 podman[93332]: 2026-01-31 07:59:38.735712772 +0000 UTC m=+0.436738753 container attach b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:59:38 compute-0 podman[93332]: 2026-01-31 07:59:38.73641813 +0000 UTC m=+0.437444061 container died b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_burnell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 07:59:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v110: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:38 compute-0 python3[93499]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846378.300162-36959-249062356761249/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:59:39 compute-0 ceph-mon[75294]: pgmap v110: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb124d4bdb6a61fbf7c3de3908d1ca986f20d6b0b23a69d591c88f7c299d934d-merged.mount: Deactivated successfully.
Jan 31 07:59:39 compute-0 sudo[93548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvohmosrcnjlhrlyhdrjqshnrfbvgodq ; /usr/bin/python3'
Jan 31 07:59:39 compute-0 sudo[93548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:39 compute-0 python3[93550]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:39 compute-0 podman[93332]: 2026-01-31 07:59:39.496499587 +0000 UTC m=+1.197525508 container remove b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:39 compute-0 podman[93551]: 2026-01-31 07:59:39.512685156 +0000 UTC m=+0.086343654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:39 compute-0 podman[93551]: 2026-01-31 07:59:39.613690967 +0000 UTC m=+0.187349415 container create c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f (image=quay.io/ceph/ceph:v20, name=hungry_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 07:59:39 compute-0 systemd[1]: Started libpod-conmon-c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f.scope.
Jan 31 07:59:39 compute-0 systemd[1]: libpod-conmon-b5b521c8e1d9eff48b907623fc3d328455d1a942f8a22e42b89ac3145c135c66.scope: Deactivated successfully.
Jan 31 07:59:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805f54c6d26b40a5de8eaabb55f0e6644af9d448bfabf961bea9c5287e34609b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805f54c6d26b40a5de8eaabb55f0e6644af9d448bfabf961bea9c5287e34609b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805f54c6d26b40a5de8eaabb55f0e6644af9d448bfabf961bea9c5287e34609b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:39 compute-0 podman[93571]: 2026-01-31 07:59:39.870794143 +0000 UTC m=+0.291723647 container create fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:59:39 compute-0 podman[93571]: 2026-01-31 07:59:39.77591741 +0000 UTC m=+0.196846954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:40 compute-0 systemd[1]: Started libpod-conmon-fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9.scope.
Jan 31 07:59:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f377aefa7da533a1ccc77687f325558964ca5b4c8e35546cb78e0482eed5dea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f377aefa7da533a1ccc77687f325558964ca5b4c8e35546cb78e0482eed5dea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f377aefa7da533a1ccc77687f325558964ca5b4c8e35546cb78e0482eed5dea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f377aefa7da533a1ccc77687f325558964ca5b4c8e35546cb78e0482eed5dea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 podman[93551]: 2026-01-31 07:59:40.100089546 +0000 UTC m=+0.673748024 container init c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f (image=quay.io/ceph/ceph:v20, name=hungry_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 07:59:40 compute-0 podman[93551]: 2026-01-31 07:59:40.11200703 +0000 UTC m=+0.685665508 container start c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f (image=quay.io/ceph/ceph:v20, name=hungry_kalam, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:40 compute-0 podman[93571]: 2026-01-31 07:59:40.146644909 +0000 UTC m=+0.567574463 container init fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:59:40 compute-0 podman[93571]: 2026-01-31 07:59:40.151852661 +0000 UTC m=+0.572782165 container start fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:59:40 compute-0 podman[93571]: 2026-01-31 07:59:40.25685219 +0000 UTC m=+0.677781704 container attach fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]: {
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:     "0": [
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:         {
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "devices": [
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "/dev/loop3"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             ],
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_name": "ceph_lv0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_size": "21470642176",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "name": "ceph_lv0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "tags": {
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cluster_name": "ceph",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.crush_device_class": "",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.encrypted": "0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.objectstore": "bluestore",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osd_id": "0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.type": "block",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.vdo": "0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.with_tpm": "0"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             },
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "type": "block",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "vg_name": "ceph_vg0"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:         }
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:     ],
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:     "1": [
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:         {
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "devices": [
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "/dev/loop4"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             ],
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_name": "ceph_lv1",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_size": "21470642176",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "name": "ceph_lv1",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "tags": {
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cluster_name": "ceph",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.crush_device_class": "",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.encrypted": "0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.objectstore": "bluestore",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osd_id": "1",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.type": "block",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.vdo": "0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.with_tpm": "0"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             },
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "type": "block",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "vg_name": "ceph_vg1"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:         }
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:     ],
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:     "2": [
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:         {
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "devices": [
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "/dev/loop5"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             ],
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_name": "ceph_lv2",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_size": "21470642176",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "name": "ceph_lv2",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "tags": {
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.cluster_name": "ceph",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.crush_device_class": "",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.encrypted": "0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.objectstore": "bluestore",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osd_id": "2",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.type": "block",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.vdo": "0",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:                 "ceph.with_tpm": "0"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             },
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "type": "block",
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:             "vg_name": "ceph_vg2"
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:         }
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]:     ]
Jan 31 07:59:40 compute-0 optimistic_mayer[93593]: }
Jan 31 07:59:40 compute-0 systemd[1]: libpod-fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9.scope: Deactivated successfully.
Jan 31 07:59:40 compute-0 podman[93551]: 2026-01-31 07:59:40.535865802 +0000 UTC m=+1.109524280 container attach c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f (image=quay.io/ceph/ceph:v20, name=hungry_kalam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:40 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:59:40 compute-0 ceph-mgr[75591]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 07:59:40 compute-0 podman[93571]: 2026-01-31 07:59:40.548101794 +0000 UTC m=+0.969031298 container died fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 07:59:40 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0[75290]: 2026-01-31T07:59:40.548+0000 7f3e2bd8a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e2 new map
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-31T07:59:40:548797+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:59:40.548267+0000
                                           modified        2026-01-31T07:59:40.548268+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 31 07:59:40 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 07:59:40 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 07:59:40 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 07:59:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 07:59:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v112: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 07:59:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 07:59:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 07:59:40 compute-0 ceph-mon[75294]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 07:59:40 compute-0 ceph-mon[75294]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 07:59:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f377aefa7da533a1ccc77687f325558964ca5b4c8e35546cb78e0482eed5dea-merged.mount: Deactivated successfully.
Jan 31 07:59:41 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:41 compute-0 ceph-mgr[75591]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 07:59:41 compute-0 systemd[1]: libpod-c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f.scope: Deactivated successfully.
Jan 31 07:59:41 compute-0 podman[93571]: 2026-01-31 07:59:41.230644996 +0000 UTC m=+1.651574520 container remove fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:59:41 compute-0 systemd[1]: libpod-conmon-fe63ac06d6ee34a8748b121e8ec1cf2f07c4d5684b24c1123fedea60f2573ff9.scope: Deactivated successfully.
Jan 31 07:59:41 compute-0 podman[93551]: 2026-01-31 07:59:41.247174424 +0000 UTC m=+1.820832872 container died c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f (image=quay.io/ceph/ceph:v20, name=hungry_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:59:41 compute-0 sudo[93284]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:41 compute-0 sudo[93647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:41 compute-0 sudo[93647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:41 compute-0 sudo[93647]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:41 compute-0 sudo[93672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 07:59:41 compute-0 sudo[93672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-805f54c6d26b40a5de8eaabb55f0e6644af9d448bfabf961bea9c5287e34609b-merged.mount: Deactivated successfully.
Jan 31 07:59:41 compute-0 podman[93636]: 2026-01-31 07:59:41.522856475 +0000 UTC m=+0.344036536 container remove c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f (image=quay.io/ceph/ceph:v20, name=hungry_kalam, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:41 compute-0 systemd[1]: libpod-conmon-c8311df71eb775b0dc926cb1d7eee6494c73313572cdadde6f5a6b5106b48e9f.scope: Deactivated successfully.
Jan 31 07:59:41 compute-0 sudo[93548]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:41 compute-0 sudo[93751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mllzlfongfultkeuxkcdvoluzvnzbofl ; /usr/bin/python3'
Jan 31 07:59:41 compute-0 sudo[93751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:41 compute-0 podman[93714]: 2026-01-31 07:59:41.611671276 +0000 UTC m=+0.020108827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:41 compute-0 python3[93753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:41 compute-0 podman[93714]: 2026-01-31 07:59:41.893013171 +0000 UTC m=+0.301450702 container create 57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:59:41 compute-0 systemd[1]: Started libpod-conmon-57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383.scope.
Jan 31 07:59:41 compute-0 ceph-mon[75294]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:59:41 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 07:59:41 compute-0 ceph-mon[75294]: osdmap e36: 3 total, 3 up, 3 in
Jan 31 07:59:41 compute-0 ceph-mon[75294]: fsmap cephfs:0
Jan 31 07:59:41 compute-0 ceph-mon[75294]: Saving service mds.cephfs spec with placement compute-0
Jan 31 07:59:41 compute-0 ceph-mon[75294]: pgmap v112: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:41 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:41 compute-0 podman[93754]: 2026-01-31 07:59:41.995215244 +0000 UTC m=+0.165813881 container create 8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec (image=quay.io/ceph/ceph:v20, name=boring_joliot, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 07:59:41 compute-0 podman[93754]: 2026-01-31 07:59:41.906161267 +0000 UTC m=+0.076759924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:42 compute-0 podman[93714]: 2026-01-31 07:59:42.112378403 +0000 UTC m=+0.520815954 container init 57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:59:42 compute-0 podman[93714]: 2026-01-31 07:59:42.117157503 +0000 UTC m=+0.525595054 container start 57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lichterman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:59:42 compute-0 eloquent_lichterman[93769]: 167 167
Jan 31 07:59:42 compute-0 systemd[1]: Started libpod-conmon-8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec.scope.
Jan 31 07:59:42 compute-0 systemd[1]: libpod-57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383.scope: Deactivated successfully.
Jan 31 07:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da44b6ddbb3ab2a593060f68ba4a5c0dc2f2818d8c3b418dabe392a130bb501/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da44b6ddbb3ab2a593060f68ba4a5c0dc2f2818d8c3b418dabe392a130bb501/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da44b6ddbb3ab2a593060f68ba4a5c0dc2f2818d8c3b418dabe392a130bb501/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:42 compute-0 podman[93714]: 2026-01-31 07:59:42.155161625 +0000 UTC m=+0.563599156 container attach 57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 07:59:42 compute-0 podman[93714]: 2026-01-31 07:59:42.155510134 +0000 UTC m=+0.563947665 container died 57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 07:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-468693705d2b218ff89ec7c097beac3f373286dd37c2a40ca6f63819fc902337-merged.mount: Deactivated successfully.
Jan 31 07:59:42 compute-0 podman[93714]: 2026-01-31 07:59:42.523942802 +0000 UTC m=+0.932380333 container remove 57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 07:59:42 compute-0 podman[93754]: 2026-01-31 07:59:42.748235699 +0000 UTC m=+0.918834356 container init 8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec (image=quay.io/ceph/ceph:v20, name=boring_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:42 compute-0 podman[93754]: 2026-01-31 07:59:42.753265665 +0000 UTC m=+0.923864292 container start 8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec (image=quay.io/ceph/ceph:v20, name=boring_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v113: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:42 compute-0 podman[93754]: 2026-01-31 07:59:42.816873602 +0000 UTC m=+0.987472249 container attach 8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec (image=quay.io/ceph/ceph:v20, name=boring_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:59:42 compute-0 podman[93799]: 2026-01-31 07:59:42.839267249 +0000 UTC m=+0.237751473 container create 6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ramanujan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:59:42 compute-0 podman[93799]: 2026-01-31 07:59:42.7600975 +0000 UTC m=+0.158581764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:43 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:59:43 compute-0 ceph-mgr[75591]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 07:59:43 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 07:59:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 07:59:43 compute-0 systemd[1]: Started libpod-conmon-6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a.scope.
Jan 31 07:59:43 compute-0 systemd[1]: libpod-conmon-57e781d401c9d418b6250059e2cc5d492df31ca854d3fa74b0c1c2f676f1a383.scope: Deactivated successfully.
Jan 31 07:59:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c044433d79d87abc2c754674a01c188507f5e53a3308cae5459b38c9452ed697/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c044433d79d87abc2c754674a01c188507f5e53a3308cae5459b38c9452ed697/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c044433d79d87abc2c754674a01c188507f5e53a3308cae5459b38c9452ed697/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c044433d79d87abc2c754674a01c188507f5e53a3308cae5459b38c9452ed697/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:43 compute-0 ceph-mon[75294]: pgmap v113: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:43 compute-0 podman[93799]: 2026-01-31 07:59:43.351031257 +0000 UTC m=+0.749515501 container init 6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ramanujan, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 07:59:43 compute-0 podman[93799]: 2026-01-31 07:59:43.356540716 +0000 UTC m=+0.755024940 container start 6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ramanujan, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:43 compute-0 podman[93799]: 2026-01-31 07:59:43.372599621 +0000 UTC m=+0.771083855 container attach 6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ramanujan, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:59:43 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:43 compute-0 boring_joliot[93777]: Scheduled mds.cephfs update...
Jan 31 07:59:43 compute-0 systemd[1]: libpod-8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec.scope: Deactivated successfully.
Jan 31 07:59:43 compute-0 podman[93754]: 2026-01-31 07:59:43.394832885 +0000 UTC m=+1.565431522 container died 8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec (image=quay.io/ceph/ceph:v20, name=boring_joliot, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8da44b6ddbb3ab2a593060f68ba4a5c0dc2f2818d8c3b418dabe392a130bb501-merged.mount: Deactivated successfully.
Jan 31 07:59:43 compute-0 podman[93754]: 2026-01-31 07:59:43.749233502 +0000 UTC m=+1.919832139 container remove 8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec (image=quay.io/ceph/ceph:v20, name=boring_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:43 compute-0 systemd[1]: libpod-conmon-8556b4fd26225f2e8d68a28f4790e29f3dd4cae8d7cb75c490e8a5a1b32fd0ec.scope: Deactivated successfully.
Jan 31 07:59:43 compute-0 sudo[93751]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:43 compute-0 lvm[93926]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:59:43 compute-0 lvm[93929]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:59:43 compute-0 lvm[93926]: VG ceph_vg0 finished
Jan 31 07:59:43 compute-0 lvm[93929]: VG ceph_vg1 finished
Jan 31 07:59:43 compute-0 lvm[93931]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:59:43 compute-0 lvm[93931]: VG ceph_vg2 finished
Jan 31 07:59:44 compute-0 eloquent_ramanujan[93837]: {}
Jan 31 07:59:44 compute-0 systemd[1]: libpod-6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a.scope: Deactivated successfully.
Jan 31 07:59:44 compute-0 podman[93799]: 2026-01-31 07:59:44.054454025 +0000 UTC m=+1.452938269 container died 6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c044433d79d87abc2c754674a01c188507f5e53a3308cae5459b38c9452ed697-merged.mount: Deactivated successfully.
Jan 31 07:59:44 compute-0 podman[93799]: 2026-01-31 07:59:44.361496827 +0000 UTC m=+1.759981051 container remove 6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:44 compute-0 systemd[1]: libpod-conmon-6992bc06b51d14b38ff2264eccff7a6a43ec0785e6a996edc9e785220bad801a.scope: Deactivated successfully.
Jan 31 07:59:44 compute-0 sudo[93672]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:59:44 compute-0 ceph-mon[75294]: from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:59:44 compute-0 ceph-mon[75294]: Saving service mds.cephfs spec with placement compute-0
Jan 31 07:59:44 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:44 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:59:44 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:44 compute-0 sudo[93992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:59:44 compute-0 sudo[93992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:44 compute-0 sudo[93992]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:44 compute-0 sudo[94060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jluhwqtjwjlpfpzgcexiksntrmhuiknt ; /usr/bin/python3'
Jan 31 07:59:44 compute-0 sudo[94060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:44 compute-0 sudo[94035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:44 compute-0 sudo[94035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:44 compute-0 sudo[94035]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:44 compute-0 sudo[94074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 07:59:44 compute-0 sudo[94074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v114: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:44 compute-0 python3[94072]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:59:44 compute-0 sudo[94060]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:44 compute-0 sudo[94183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvhjxgcwoyagjintyfsrbcfddaoqotua ; /usr/bin/python3'
Jan 31 07:59:44 compute-0 sudo[94183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:45 compute-0 python3[94186]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846384.5621574-37007-191005956221264/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=99483649f550cfa7541e42c2cedbbe9e650453a5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:59:45 compute-0 sudo[94183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:45 compute-0 podman[94217]: 2026-01-31 07:59:45.207949197 +0000 UTC m=+0.140356540 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:59:45 compute-0 podman[94261]: 2026-01-31 07:59:45.35397684 +0000 UTC m=+0.053059871 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:59:45 compute-0 podman[94217]: 2026-01-31 07:59:45.371973108 +0000 UTC m=+0.304380441 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 31 07:59:45 compute-0 sudo[94323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbgvihcckuztibxuvctzdchowkeyjwer ; /usr/bin/python3'
Jan 31 07:59:45 compute-0 sudo[94323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:45 compute-0 python3[94326]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:45 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:45 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:45 compute-0 ceph-mon[75294]: pgmap v114: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:45 compute-0 podman[94360]: 2026-01-31 07:59:45.656979923 +0000 UTC m=+0.015512032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:45 compute-0 podman[94360]: 2026-01-31 07:59:45.896088851 +0000 UTC m=+0.254620920 container create 86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c (image=quay.io/ceph/ceph:v20, name=distracted_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:59:46 compute-0 systemd[1]: Started libpod-conmon-86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c.scope.
Jan 31 07:59:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b075491936e4f31bf05cf7d45fd24225dd2453ba5d4e4bef1be63e2d7006587/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b075491936e4f31bf05cf7d45fd24225dd2453ba5d4e4bef1be63e2d7006587/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:46 compute-0 podman[94360]: 2026-01-31 07:59:46.596340133 +0000 UTC m=+0.954872242 container init 86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c (image=quay.io/ceph/ceph:v20, name=distracted_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:46 compute-0 podman[94360]: 2026-01-31 07:59:46.601668638 +0000 UTC m=+0.960200717 container start 86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c (image=quay.io/ceph/ceph:v20, name=distracted_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v115: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:46 compute-0 podman[94360]: 2026-01-31 07:59:46.780342456 +0000 UTC m=+1.138874535 container attach 86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c (image=quay.io/ceph/ceph:v20, name=distracted_shirley, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:59:46 compute-0 sudo[94074]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:59:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 31 07:59:47 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3487520200' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 07:59:47 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:59:47 compute-0 ceph-mon[75294]: pgmap v115: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:47 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3487520200' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3487520200' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 07:59:48 compute-0 systemd[1]: libpod-86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c.scope: Deactivated successfully.
Jan 31 07:59:48 compute-0 podman[94360]: 2026-01-31 07:59:48.143348745 +0000 UTC m=+2.501880834 container died 86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c (image=quay.io/ceph/ceph:v20, name=distracted_shirley, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:48 compute-0 sudo[94466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:48 compute-0 sudo[94466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:48 compute-0 sudo[94466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:48 compute-0 sudo[94491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 07:59:48 compute-0 sudo[94491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b075491936e4f31bf05cf7d45fd24225dd2453ba5d4e4bef1be63e2d7006587-merged.mount: Deactivated successfully.
Jan 31 07:59:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v116: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:48 compute-0 sudo[94491]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 07:59:48 compute-0 podman[94360]: 2026-01-31 07:59:48.896119352 +0000 UTC m=+3.254651431 container remove 86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c (image=quay.io/ceph/ceph:v20, name=distracted_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:59:48 compute-0 systemd[1]: libpod-conmon-86fe4e9eb50e3110cc5d25c07e56d4c5b65840b01d7161c0540c2798d771e72c.scope: Deactivated successfully.
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:59:48 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:48 compute-0 sudo[94323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:48 compute-0 sudo[94550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:48 compute-0 sudo[94550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:48 compute-0 sudo[94550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3487520200' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:48 compute-0 ceph-mon[75294]: pgmap v116: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 07:59:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:49 compute-0 sudo[94575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 07:59:49 compute-0 sudo[94575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:49 compute-0 podman[94613]: 2026-01-31 07:59:49.30506066 +0000 UTC m=+0.088352449 container create 78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_wilbur, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:59:49 compute-0 podman[94613]: 2026-01-31 07:59:49.240736065 +0000 UTC m=+0.024027864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:49 compute-0 systemd[1]: Started libpod-conmon-78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8.scope.
Jan 31 07:59:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:49 compute-0 sudo[94655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqqkbfxqedawohvoipodaoiykvnbzyyv ; /usr/bin/python3'
Jan 31 07:59:49 compute-0 sudo[94655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:49 compute-0 podman[94613]: 2026-01-31 07:59:49.496016142 +0000 UTC m=+0.279307971 container init 78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_wilbur, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:49 compute-0 podman[94613]: 2026-01-31 07:59:49.501353177 +0000 UTC m=+0.284644966 container start 78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:49 compute-0 loving_wilbur[94645]: 167 167
Jan 31 07:59:49 compute-0 systemd[1]: libpod-78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8.scope: Deactivated successfully.
Jan 31 07:59:49 compute-0 podman[94613]: 2026-01-31 07:59:49.525759369 +0000 UTC m=+0.309051188 container attach 78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_wilbur, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:49 compute-0 podman[94613]: 2026-01-31 07:59:49.526415757 +0000 UTC m=+0.309707546 container died 78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:49 compute-0 python3[94657]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-93e7b1f7a072e5701c4118befe24fada08748600d203a239875c6aa85a845955-merged.mount: Deactivated successfully.
Jan 31 07:59:49 compute-0 podman[94613]: 2026-01-31 07:59:49.73694673 +0000 UTC m=+0.520238559 container remove 78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_wilbur, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:49 compute-0 podman[94672]: 2026-01-31 07:59:49.769935416 +0000 UTC m=+0.163402596 container create 15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60 (image=quay.io/ceph/ceph:v20, name=elastic_feistel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 07:59:49 compute-0 systemd[1]: Started libpod-conmon-15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60.scope.
Jan 31 07:59:49 compute-0 podman[94672]: 2026-01-31 07:59:49.7343597 +0000 UTC m=+0.127826910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7a292df450860c53162134834c6062b8d31f83aa80d93e9ae66caf6addb2ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7a292df450860c53162134834c6062b8d31f83aa80d93e9ae66caf6addb2ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:49 compute-0 podman[94697]: 2026-01-31 07:59:49.848317753 +0000 UTC m=+0.035614418 container create 4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:49 compute-0 podman[94672]: 2026-01-31 07:59:49.8592886 +0000 UTC m=+0.252756180 container init 15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60 (image=quay.io/ceph/ceph:v20, name=elastic_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 07:59:49 compute-0 podman[94672]: 2026-01-31 07:59:49.865220331 +0000 UTC m=+0.258687511 container start 15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60 (image=quay.io/ceph/ceph:v20, name=elastic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:49 compute-0 podman[94672]: 2026-01-31 07:59:49.873603229 +0000 UTC m=+0.267070419 container attach 15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60 (image=quay.io/ceph/ceph:v20, name=elastic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 07:59:49 compute-0 systemd[1]: Started libpod-conmon-4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318.scope.
Jan 31 07:59:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:49 compute-0 podman[94697]: 2026-01-31 07:59:49.830802007 +0000 UTC m=+0.018098702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85af5441f0aa4568432479eb853947e91e945e2554121e21ed42dd37cf9da731/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85af5441f0aa4568432479eb853947e91e945e2554121e21ed42dd37cf9da731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85af5441f0aa4568432479eb853947e91e945e2554121e21ed42dd37cf9da731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85af5441f0aa4568432479eb853947e91e945e2554121e21ed42dd37cf9da731/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85af5441f0aa4568432479eb853947e91e945e2554121e21ed42dd37cf9da731/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:49 compute-0 podman[94697]: 2026-01-31 07:59:49.94480784 +0000 UTC m=+0.132104525 container init 4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:49 compute-0 podman[94697]: 2026-01-31 07:59:49.95327806 +0000 UTC m=+0.140574725 container start 4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:59:49 compute-0 podman[94697]: 2026-01-31 07:59:49.957869325 +0000 UTC m=+0.145165990 container attach 4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:59:49 compute-0 systemd[1]: libpod-conmon-78d6c3a2cc0c9a1200ca0056e6ce7c9d0fa1fd536c509c3b1dfd110155ac9cd8.scope: Deactivated successfully.
Jan 31 07:59:50 compute-0 exciting_newton[94718]: --> passed data devices: 0 physical, 3 LVM
Jan 31 07:59:50 compute-0 exciting_newton[94718]: --> All data devices are unavailable
Jan 31 07:59:50 compute-0 systemd[1]: libpod-4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318.scope: Deactivated successfully.
Jan 31 07:59:50 compute-0 podman[94697]: 2026-01-31 07:59:50.383236009 +0000 UTC m=+0.570532674 container died 4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 07:59:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 07:59:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064801233' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:59:50 compute-0 elastic_feistel[94704]: 
Jan 31 07:59:50 compute-0 elastic_feistel[94704]: {"fsid":"dc03f344-536f-5591-add9-31059f42637c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":201,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":3,"osd_up_since":1769846318,"num_in_osds":3,"osd_in_since":1769846291,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84008960,"bytes_avail":64327917568,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-31T07:59:40:548797+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":6,"modified":"2026-01-31T07:59:38.765661+0000","services":{}},"progress_events":{}}
Jan 31 07:59:50 compute-0 systemd[1]: libpod-15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60.scope: Deactivated successfully.
Jan 31 07:59:50 compute-0 conmon[94704]: conmon 15c9fbb09e856d97b494 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60.scope/container/memory.events
Jan 31 07:59:50 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3064801233' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 07:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-85af5441f0aa4568432479eb853947e91e945e2554121e21ed42dd37cf9da731-merged.mount: Deactivated successfully.
Jan 31 07:59:50 compute-0 podman[94672]: 2026-01-31 07:59:50.41680803 +0000 UTC m=+0.810275210 container died 15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60 (image=quay.io/ceph/ceph:v20, name=elastic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:50 compute-0 podman[94697]: 2026-01-31 07:59:50.448823958 +0000 UTC m=+0.636120623 container remove 4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:59:50 compute-0 systemd[1]: libpod-conmon-4216d19dff26d17f1fb575210cf06e40fb622cf7a749f6eee626d3832f339318.scope: Deactivated successfully.
Jan 31 07:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a7a292df450860c53162134834c6062b8d31f83aa80d93e9ae66caf6addb2ae-merged.mount: Deactivated successfully.
Jan 31 07:59:50 compute-0 podman[94672]: 2026-01-31 07:59:50.486047818 +0000 UTC m=+0.879514988 container remove 15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60 (image=quay.io/ceph/ceph:v20, name=elastic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:50 compute-0 systemd[1]: libpod-conmon-15c9fbb09e856d97b494d871bca8829796b7ef9c7a71ba252242258912f03b60.scope: Deactivated successfully.
Jan 31 07:59:50 compute-0 sudo[94655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:50 compute-0 sudo[94575]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:50 compute-0 sudo[94786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:50 compute-0 sudo[94786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:50 compute-0 sudo[94786]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:50 compute-0 sudo[94811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 07:59:50 compute-0 sudo[94811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:50 compute-0 sudo[94859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tawxpcpltabhcydaelyyvohnbwywdqyh ; /usr/bin/python3'
Jan 31 07:59:50 compute-0 sudo[94859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_07:59:50
Jan 31 07:59:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:59:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 07:59:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', '.mgr']
Jan 31 07:59:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 07:59:50 compute-0 python3[94861]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v117: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:50 compute-0 podman[94862]: 2026-01-31 07:59:50.821259444 +0000 UTC m=+0.043593123 container create da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9 (image=quay.io/ceph/ceph:v20, name=distracted_cohen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:50 compute-0 systemd[1]: Started libpod-conmon-da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9.scope.
Jan 31 07:59:50 compute-0 podman[94886]: 2026-01-31 07:59:50.882553968 +0000 UTC m=+0.042994048 container create f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 07:59:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fbeb06d79072ebf94582899d4e8cb3f01cbe34366f4b97c6908feb537cfc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fbeb06d79072ebf94582899d4e8cb3f01cbe34366f4b97c6908feb537cfc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:50 compute-0 systemd[1]: Started libpod-conmon-f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9.scope.
Jan 31 07:59:50 compute-0 podman[94862]: 2026-01-31 07:59:50.80412028 +0000 UTC m=+0.026453989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:50 compute-0 podman[94862]: 2026-01-31 07:59:50.91577151 +0000 UTC m=+0.138105209 container init da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9 (image=quay.io/ceph/ceph:v20, name=distracted_cohen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:50 compute-0 podman[94862]: 2026-01-31 07:59:50.922916844 +0000 UTC m=+0.145250523 container start da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9 (image=quay.io/ceph/ceph:v20, name=distracted_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:59:50 compute-0 podman[94886]: 2026-01-31 07:59:50.92425146 +0000 UTC m=+0.084691560 container init f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_satoshi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:50 compute-0 podman[94886]: 2026-01-31 07:59:50.931178178 +0000 UTC m=+0.091618268 container start f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 07:59:50 compute-0 podman[94862]: 2026-01-31 07:59:50.932816702 +0000 UTC m=+0.155150411 container attach da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9 (image=quay.io/ceph/ceph:v20, name=distracted_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:50 compute-0 kind_satoshi[94907]: 167 167
Jan 31 07:59:50 compute-0 systemd[1]: libpod-f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9.scope: Deactivated successfully.
Jan 31 07:59:50 compute-0 podman[94886]: 2026-01-31 07:59:50.936975095 +0000 UTC m=+0.097415185 container attach f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:50 compute-0 podman[94886]: 2026-01-31 07:59:50.937581801 +0000 UTC m=+0.098021881 container died f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:59:50 compute-0 podman[94886]: 2026-01-31 07:59:50.8642015 +0000 UTC m=+0.024641600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f8f2fecd402dd45c332535451d3ef065b65d55c46d98fffaf25f354b3f332a2-merged.mount: Deactivated successfully.
Jan 31 07:59:50 compute-0 podman[94886]: 2026-01-31 07:59:50.980195508 +0000 UTC m=+0.140635588 container remove f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 07:59:50 compute-0 systemd[1]: libpod-conmon-f84c5064845ad0b301942412c2aa7d6274b9c3a2ad34de6355946c235128c0d9.scope: Deactivated successfully.
Jan 31 07:59:51 compute-0 podman[94951]: 2026-01-31 07:59:51.098847668 +0000 UTC m=+0.036324077 container create 187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jemison, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:51 compute-0 systemd[1]: Started libpod-conmon-187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868.scope.
Jan 31 07:59:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e1f02d021442b62fb0e1c402b9c2e6a4928553afee38caefd7b62e7991b7f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e1f02d021442b62fb0e1c402b9c2e6a4928553afee38caefd7b62e7991b7f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e1f02d021442b62fb0e1c402b9c2e6a4928553afee38caefd7b62e7991b7f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e1f02d021442b62fb0e1c402b9c2e6a4928553afee38caefd7b62e7991b7f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:51 compute-0 podman[94951]: 2026-01-31 07:59:51.080909531 +0000 UTC m=+0.018385860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:51 compute-0 podman[94951]: 2026-01-31 07:59:51.190817563 +0000 UTC m=+0.128293862 container init 187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:59:51 compute-0 podman[94951]: 2026-01-31 07:59:51.19730622 +0000 UTC m=+0.134782559 container start 187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:59:51 compute-0 podman[94951]: 2026-01-31 07:59:51.205670617 +0000 UTC m=+0.143146946 container attach 187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jemison, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 07:59:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 07:59:51 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627716921' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 07:59:51 compute-0 distracted_cohen[94902]: 
Jan 31 07:59:51 compute-0 distracted_cohen[94902]: {"epoch":1,"fsid":"dc03f344-536f-5591-add9-31059f42637c","modified":"2026-01-31T07:56:20.975396Z","created":"2026-01-31T07:56:20.975396Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 31 07:59:51 compute-0 distracted_cohen[94902]: dumped monmap epoch 1
Jan 31 07:59:51 compute-0 systemd[1]: libpod-da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9.scope: Deactivated successfully.
Jan 31 07:59:51 compute-0 podman[94862]: 2026-01-31 07:59:51.416627522 +0000 UTC m=+0.638961201 container died da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9 (image=quay.io/ceph/ceph:v20, name=distracted_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:59:51 compute-0 ceph-mon[75294]: pgmap v117: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:51 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2627716921' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 07:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a34fbeb06d79072ebf94582899d4e8cb3f01cbe34366f4b97c6908feb537cfc0-merged.mount: Deactivated successfully.
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]: {
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:     "0": [
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:         {
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "devices": [
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "/dev/loop3"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             ],
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_name": "ceph_lv0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_size": "21470642176",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "name": "ceph_lv0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "tags": {
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cluster_name": "ceph",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.crush_device_class": "",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.encrypted": "0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.objectstore": "bluestore",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osd_id": "0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.type": "block",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.vdo": "0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.with_tpm": "0"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             },
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "type": "block",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "vg_name": "ceph_vg0"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:         }
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:     ],
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:     "1": [
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:         {
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "devices": [
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "/dev/loop4"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             ],
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_name": "ceph_lv1",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_size": "21470642176",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "name": "ceph_lv1",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "tags": {
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cluster_name": "ceph",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.crush_device_class": "",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.encrypted": "0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.objectstore": "bluestore",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osd_id": "1",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.type": "block",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.vdo": "0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.with_tpm": "0"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             },
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "type": "block",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "vg_name": "ceph_vg1"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:         }
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:     ],
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:     "2": [
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:         {
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "devices": [
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "/dev/loop5"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             ],
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_name": "ceph_lv2",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_size": "21470642176",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "name": "ceph_lv2",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "tags": {
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.cluster_name": "ceph",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.crush_device_class": "",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.encrypted": "0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.objectstore": "bluestore",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osd_id": "2",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.type": "block",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.vdo": "0",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:                 "ceph.with_tpm": "0"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             },
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "type": "block",
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:             "vg_name": "ceph_vg2"
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:         }
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]:     ]
Jan 31 07:59:51 compute-0 dreamy_jemison[94969]: }
Jan 31 07:59:51 compute-0 podman[94862]: 2026-01-31 07:59:51.489380426 +0000 UTC m=+0.711714105 container remove da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9 (image=quay.io/ceph/ceph:v20, name=distracted_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:51 compute-0 systemd[1]: libpod-187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868.scope: Deactivated successfully.
Jan 31 07:59:51 compute-0 sudo[94859]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:51 compute-0 systemd[1]: libpod-conmon-da3eb031d038680574870d6c7807197413e096c99f2458654cfae163cde793e9.scope: Deactivated successfully.
Jan 31 07:59:51 compute-0 podman[94989]: 2026-01-31 07:59:51.541072148 +0000 UTC m=+0.019890621 container died 187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 07:59:51 compute-0 podman[94989]: 2026-01-31 07:59:51.722012719 +0000 UTC m=+0.200831172 container remove 187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:59:51 compute-0 systemd[1]: libpod-conmon-187e8ae0767c288a06b0f633cd04d44a6ad6bee528bb5ef990348d3048042868.scope: Deactivated successfully.
Jan 31 07:59:51 compute-0 sudo[94811]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:51 compute-0 sudo[95005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:51 compute-0 sudo[95005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:51 compute-0 sudo[95005]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3e1f02d021442b62fb0e1c402b9c2e6a4928553afee38caefd7b62e7991b7f8-merged.mount: Deactivated successfully.
Jan 31 07:59:51 compute-0 sudo[95030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 07:59:51 compute-0 sudo[95030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:51 compute-0 sudo[95078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhwzthsjwxrrxcwpcqzirvlssfptdxp ; /usr/bin/python3'
Jan 31 07:59:51 compute-0 sudo[95078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:52 compute-0 python3[95080]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:52 compute-0 podman[95095]: 2026-01-31 07:59:52.110365667 +0000 UTC m=+0.051325374 container create 457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:59:52 compute-0 systemd[1]: Started libpod-conmon-457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4.scope.
Jan 31 07:59:52 compute-0 podman[95094]: 2026-01-31 07:59:52.146176039 +0000 UTC m=+0.086405956 container create 0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d (image=quay.io/ceph/ceph:v20, name=serene_heisenberg, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 07:59:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:52 compute-0 systemd[1]: Started libpod-conmon-0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d.scope.
Jan 31 07:59:52 compute-0 podman[95095]: 2026-01-31 07:59:52.079394407 +0000 UTC m=+0.020354124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:52 compute-0 podman[95095]: 2026-01-31 07:59:52.181352464 +0000 UTC m=+0.122312201 container init 457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6f1d08d457613cc3ff18912bbc39b0815df6167ecf0e3f646fd0ac25a35528/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6f1d08d457613cc3ff18912bbc39b0815df6167ecf0e3f646fd0ac25a35528/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:52 compute-0 podman[95095]: 2026-01-31 07:59:52.186899234 +0000 UTC m=+0.127858941 container start 457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 07:59:52 compute-0 podman[95094]: 2026-01-31 07:59:52.092313797 +0000 UTC m=+0.032543804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:52 compute-0 podman[95095]: 2026-01-31 07:59:52.190098251 +0000 UTC m=+0.131057958 container attach 457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:59:52 compute-0 unruffled_rubin[95123]: 167 167
Jan 31 07:59:52 compute-0 systemd[1]: libpod-457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4.scope: Deactivated successfully.
Jan 31 07:59:52 compute-0 podman[95094]: 2026-01-31 07:59:52.195030145 +0000 UTC m=+0.135260092 container init 0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d (image=quay.io/ceph/ceph:v20, name=serene_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:59:52 compute-0 podman[95095]: 2026-01-31 07:59:52.195095597 +0000 UTC m=+0.136055304 container died 457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:59:52 compute-0 podman[95094]: 2026-01-31 07:59:52.198990332 +0000 UTC m=+0.139220249 container start 0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d (image=quay.io/ceph/ceph:v20, name=serene_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:52 compute-0 podman[95094]: 2026-01-31 07:59:52.206097345 +0000 UTC m=+0.146327262 container attach 0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d (image=quay.io/ceph/ceph:v20, name=serene_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e3d9e1a570cc109f182852f76cea9497767e83a9368f46549cd5db0e3a4b18-merged.mount: Deactivated successfully.
Jan 31 07:59:52 compute-0 podman[95095]: 2026-01-31 07:59:52.233477858 +0000 UTC m=+0.174437565 container remove 457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:59:52 compute-0 systemd[1]: libpod-conmon-457ae30714c5d91a2a296353479f6a0a2464ce2ebc89adf4ef4b6b88028e6fc4.scope: Deactivated successfully.
Jan 31 07:59:52 compute-0 podman[95171]: 2026-01-31 07:59:52.341164901 +0000 UTC m=+0.038082365 container create 2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:52 compute-0 systemd[1]: Started libpod-conmon-2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9.scope.
Jan 31 07:59:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a9e4a4f820f1cc283b5b9b344736ce5a1556331e644cd7dcf093425c569bfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a9e4a4f820f1cc283b5b9b344736ce5a1556331e644cd7dcf093425c569bfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a9e4a4f820f1cc283b5b9b344736ce5a1556331e644cd7dcf093425c569bfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:52 compute-0 podman[95171]: 2026-01-31 07:59:52.321290941 +0000 UTC m=+0.018208445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a9e4a4f820f1cc283b5b9b344736ce5a1556331e644cd7dcf093425c569bfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:52 compute-0 podman[95171]: 2026-01-31 07:59:52.444998058 +0000 UTC m=+0.141915532 container init 2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_allen, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:52 compute-0 podman[95171]: 2026-01-31 07:59:52.451176705 +0000 UTC m=+0.148094169 container start 2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_allen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:59:52 compute-0 podman[95171]: 2026-01-31 07:59:52.4874652 +0000 UTC m=+0.184382694 container attach 2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 31 07:59:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3914419956' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 07:59:52 compute-0 serene_heisenberg[95129]: [client.openstack]
Jan 31 07:59:52 compute-0 serene_heisenberg[95129]:         key = AQBmtX1pAAAAABAAGlx/43NfN+tI0V7rwdqN7g==
Jan 31 07:59:52 compute-0 serene_heisenberg[95129]:         caps mgr = "allow *"
Jan 31 07:59:52 compute-0 serene_heisenberg[95129]:         caps mon = "profile rbd"
Jan 31 07:59:52 compute-0 serene_heisenberg[95129]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 07:59:52 compute-0 systemd[1]: libpod-0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d.scope: Deactivated successfully.
Jan 31 07:59:52 compute-0 podman[95094]: 2026-01-31 07:59:52.736719644 +0000 UTC m=+0.676949571 container died 0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d (image=quay.io/ceph/ceph:v20, name=serene_heisenberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v118: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:52 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3914419956' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 07:59:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff6f1d08d457613cc3ff18912bbc39b0815df6167ecf0e3f646fd0ac25a35528-merged.mount: Deactivated successfully.
Jan 31 07:59:53 compute-0 podman[95094]: 2026-01-31 07:59:53.052697579 +0000 UTC m=+0.992927536 container remove 0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d (image=quay.io/ceph/ceph:v20, name=serene_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:53 compute-0 systemd[1]: libpod-conmon-0a8cdb353db45a3f9109227484d8cdb0aa769d125434f6ed024753b5ec3c9a2d.scope: Deactivated successfully.
Jan 31 07:59:53 compute-0 lvm[95278]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:59:53 compute-0 lvm[95281]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 07:59:53 compute-0 lvm[95278]: VG ceph_vg0 finished
Jan 31 07:59:53 compute-0 lvm[95281]: VG ceph_vg1 finished
Jan 31 07:59:53 compute-0 sudo[95078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:53 compute-0 lvm[95283]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 07:59:53 compute-0 lvm[95283]: VG ceph_vg2 finished
Jan 31 07:59:53 compute-0 inspiring_allen[95188]: {}
Jan 31 07:59:53 compute-0 systemd[1]: libpod-2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9.scope: Deactivated successfully.
Jan 31 07:59:53 compute-0 systemd[1]: libpod-2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9.scope: Consumed 1.040s CPU time.
Jan 31 07:59:53 compute-0 podman[95171]: 2026-01-31 07:59:53.205865076 +0000 UTC m=+0.902782540 container died 2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_allen, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-53a9e4a4f820f1cc283b5b9b344736ce5a1556331e644cd7dcf093425c569bfb-merged.mount: Deactivated successfully.
Jan 31 07:59:53 compute-0 podman[95171]: 2026-01-31 07:59:53.704775064 +0000 UTC m=+1.401692528 container remove 2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:59:53 compute-0 sudo[95030]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:59:53 compute-0 systemd[1]: libpod-conmon-2fa734b869a49b40542ebc2fc185d706df9db602a077f1bdcc17a3c98d24b7f9.scope: Deactivated successfully.
Jan 31 07:59:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:59:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:53 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev 9ae8ffb3-c172-44cc-841d-b8f043c3ab1b (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 07:59:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ockecq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 31 07:59:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ockecq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 07:59:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ockecq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:59:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 31 07:59:54 compute-0 ceph-mon[75294]: pgmap v118: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ockecq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 07:59:54 compute-0 sudo[95445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txptwdjzxofhutyizbearsopygcrwbod ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846394.1704543-37079-252319059967804/async_wrapper.py j47742963273 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846394.1704543-37079-252319059967804/AnsiballZ_command.py _'
Jan 31 07:59:54 compute-0 sudo[95445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:59:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ockecq on compute-0
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ockecq on compute-0
Jan 31 07:59:54 compute-0 sudo[95448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:54 compute-0 sudo[95448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:59:54 compute-0 sudo[95448]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:54 compute-0 sudo[95473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:59:54 compute-0 sudo[95473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:54 compute-0 ansible-async_wrapper.py[95447]: Invoked with j47742963273 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846394.1704543-37079-252319059967804/AnsiballZ_command.py _
Jan 31 07:59:54 compute-0 ansible-async_wrapper.py[95500]: Starting module and watcher
Jan 31 07:59:54 compute-0 ansible-async_wrapper.py[95500]: Start watching 95501 (30)
Jan 31 07:59:54 compute-0 ansible-async_wrapper.py[95501]: Start module (95501)
Jan 31 07:59:54 compute-0 ansible-async_wrapper.py[95447]: Return async_wrapper task started.
Jan 31 07:59:54 compute-0 sudo[95445]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:59:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 31 07:59:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v119: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:54 compute-0 python3[95502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:54 compute-0 podman[95503]: 2026-01-31 07:59:54.819738072 +0000 UTC m=+0.030838498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:54 compute-0 podman[95503]: 2026-01-31 07:59:54.920440804 +0000 UTC m=+0.131541210 container create d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d (image=quay.io/ceph/ceph:v20, name=suspicious_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:54 compute-0 systemd[1]: Started libpod-conmon-d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d.scope.
Jan 31 07:59:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec31a29b8fba7efa4cfb60942a175201ae41713f35a7b5fb4326fb37ca1f520/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec31a29b8fba7efa4cfb60942a175201ae41713f35a7b5fb4326fb37ca1f520/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:55 compute-0 podman[95503]: 2026-01-31 07:59:55.200355611 +0000 UTC m=+0.411456117 container init d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d (image=quay.io/ceph/ceph:v20, name=suspicious_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 07:59:55 compute-0 podman[95503]: 2026-01-31 07:59:55.208327347 +0000 UTC m=+0.419427743 container start d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d (image=quay.io/ceph/ceph:v20, name=suspicious_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 07:59:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ockecq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:59:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:55 compute-0 ceph-mon[75294]: Deploying daemon rgw.rgw.compute-0.ockecq on compute-0
Jan 31 07:59:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:55 compute-0 ceph-mon[75294]: pgmap v119: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:55 compute-0 podman[95503]: 2026-01-31 07:59:55.327173103 +0000 UTC m=+0.538273499 container attach d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d (image=quay.io/ceph/ceph:v20, name=suspicious_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:59:55 compute-0 podman[95581]: 2026-01-31 07:59:55.458712322 +0000 UTC m=+0.017300561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:55 compute-0 podman[95581]: 2026-01-31 07:59:55.650821395 +0000 UTC m=+0.209409604 container create d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 07:59:55 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:59:55 compute-0 suspicious_cannon[95533]: 
Jan 31 07:59:55 compute-0 suspicious_cannon[95533]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:59:55 compute-0 systemd[1]: libpod-d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d.scope: Deactivated successfully.
Jan 31 07:59:55 compute-0 sudo[95643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfohtcixxfxaiqkscwqhpyfexoewiuii ; /usr/bin/python3'
Jan 31 07:59:55 compute-0 sudo[95643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:55 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:59:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 07:59:55 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 31 07:59:55 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev 7491a528-74a0-485b-a940-0bf22e7d6df5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 07:59:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 31 07:59:55 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:55 compute-0 systemd[1]: Started libpod-conmon-d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09.scope.
Jan 31 07:59:55 compute-0 podman[95503]: 2026-01-31 07:59:55.855813496 +0000 UTC m=+1.066913902 container died d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d (image=quay.io/ceph/ceph:v20, name=suspicious_cannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 07:59:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:55 compute-0 python3[95646]: ansible-ansible.legacy.async_status Invoked with jid=j47742963273.95447 mode=status _async_dir=/root/.ansible_async
Jan 31 07:59:55 compute-0 sudo[95643]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:55 compute-0 podman[95581]: 2026-01-31 07:59:55.994472949 +0000 UTC m=+0.553061158 container init d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 07:59:55 compute-0 podman[95581]: 2026-01-31 07:59:55.998708714 +0000 UTC m=+0.557296933 container start d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shtern, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 07:59:56 compute-0 intelligent_shtern[95662]: 167 167
Jan 31 07:59:56 compute-0 systemd[1]: libpod-d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09.scope: Deactivated successfully.
Jan 31 07:59:56 compute-0 podman[95581]: 2026-01-31 07:59:56.090963958 +0000 UTC m=+0.649552177 container attach d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shtern, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:59:56 compute-0 podman[95581]: 2026-01-31 07:59:56.091619776 +0000 UTC m=+0.650207995 container died d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shtern, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 07:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4424981bc64ffc37b37ef317ee5c47bffd45108f7be0e127a26e548145ce79db-merged.mount: Deactivated successfully.
Jan 31 07:59:56 compute-0 podman[95581]: 2026-01-31 07:59:56.405120042 +0000 UTC m=+0.963708261 container remove d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_shtern, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:59:56 compute-0 systemd[1]: libpod-conmon-d317149575580cae1940067f6675595df1e3086e273a01b43a893f8a41ef5e09.scope: Deactivated successfully.
Jan 31 07:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cec31a29b8fba7efa4cfb60942a175201ae41713f35a7b5fb4326fb37ca1f520-merged.mount: Deactivated successfully.
Jan 31 07:59:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v121: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 07:59:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 07:59:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 07:59:56 compute-0 podman[95503]: 2026-01-31 07:59:56.988977046 +0000 UTC m=+2.200077452 container remove d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d (image=quay.io/ceph/ceph:v20, name=suspicious_cannon, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:56 compute-0 systemd[1]: Reloading.
Jan 31 07:59:57 compute-0 ansible-async_wrapper.py[95501]: Module complete (95501)
Jan 31 07:59:57 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:59:57 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:59:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 07:59:57 compute-0 systemd-rc-local-generator[95753]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:59:57 compute-0 systemd-sysv-generator[95758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:59:57 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 31 07:59:57 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev fd98cbcc-cdd6-41a0-8878-4df7c415b88c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 07:59:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 31 07:59:57 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:57 compute-0 ceph-mon[75294]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:59:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:59:57 compute-0 ceph-mon[75294]: osdmap e37: 3 total, 3 up, 3 in
Jan 31 07:59:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 07:59:57 compute-0 systemd[1]: libpod-conmon-d7a9cbfad7b7d1c3332165034108866f7042eec0a5a83147479eedef9598b85d.scope: Deactivated successfully.
Jan 31 07:59:57 compute-0 sudo[95728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvdxbbfgdznpchvkuktattzotjzdlbio ; /usr/bin/python3'
Jan 31 07:59:57 compute-0 sudo[95728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:57 compute-0 systemd[1]: Reloading.
Jan 31 07:59:57 compute-0 systemd-rc-local-generator[95792]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:59:57 compute-0 systemd-sysv-generator[95797]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:59:57 compute-0 python3[95766]: ansible-ansible.legacy.async_status Invoked with jid=j47742963273.95447 mode=status _async_dir=/root/.ansible_async
Jan 31 07:59:57 compute-0 sudo[95728]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:57 compute-0 sudo[95851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glokwspyflgtnwwmjnikbnyjkwoxxnjn ; /usr/bin/python3'
Jan 31 07:59:57 compute-0 sudo[95851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:57 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.ockecq for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 07:59:57 compute-0 python3[95855]: ansible-ansible.legacy.async_status Invoked with jid=j47742963273.95447 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 07:59:57 compute-0 sudo[95851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:57 compute-0 podman[95901]: 2026-01-31 07:59:57.62590711 +0000 UTC m=+0.017292180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 07:59:57 compute-0 podman[95901]: 2026-01-31 07:59:57.802979426 +0000 UTC m=+0.194364556 container create 3da15e103d53ff31a443f16903db04ebea1f38efe24bb0a69e3f6a2d5bca4e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-rgw-rgw-compute-0-ockecq, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e291e9c368f60b0f3086892facc56f92534495d19bbaa51173f8d802bfeb7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e291e9c368f60b0f3086892facc56f92534495d19bbaa51173f8d802bfeb7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e291e9c368f60b0f3086892facc56f92534495d19bbaa51173f8d802bfeb7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e291e9c368f60b0f3086892facc56f92534495d19bbaa51173f8d802bfeb7b/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ockecq supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:57 compute-0 podman[95901]: 2026-01-31 07:59:57.99106564 +0000 UTC m=+0.382450660 container init 3da15e103d53ff31a443f16903db04ebea1f38efe24bb0a69e3f6a2d5bca4e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-rgw-rgw-compute-0-ockecq, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:57 compute-0 podman[95901]: 2026-01-31 07:59:57.997815723 +0000 UTC m=+0.389200723 container start 3da15e103d53ff31a443f16903db04ebea1f38efe24bb0a69e3f6a2d5bca4e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-rgw-rgw-compute-0-ockecq, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 07:59:58 compute-0 sudo[95953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmaeomcjzqbbkqkyuaztieiwqyfzdpl ; /usr/bin/python3'
Jan 31 07:59:58 compute-0 bash[95901]: 3da15e103d53ff31a443f16903db04ebea1f38efe24bb0a69e3f6a2d5bca4e19
Jan 31 07:59:58 compute-0 sudo[95953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:59:58 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.ockecq for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 07:59:58 compute-0 radosgw[95921]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:59:58 compute-0 radosgw[95921]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 31 07:59:58 compute-0 radosgw[95921]: framework: beast
Jan 31 07:59:58 compute-0 radosgw[95921]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 07:59:58 compute-0 radosgw[95921]: init_numa not setting numa affinity
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 31 07:59:58 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev 358d3c16-29b8-413a-99d8-9f59a11a5687 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:58 compute-0 sudo[95473]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 07:59:58 compute-0 python3[95955]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:59:58 compute-0 ceph-mon[75294]: pgmap v121: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:59:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:59:58 compute-0 ceph-mon[75294]: osdmap e38: 3 total, 3 up, 3 in
Jan 31 07:59:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:59:58 compute-0 ceph-mon[75294]: osdmap e39: 3 total, 3 up, 3 in
Jan 31 07:59:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 07:59:58 compute-0 podman[95976]: 2026-01-31 07:59:58.312354959 +0000 UTC m=+0.083998021 container create c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9 (image=quay.io/ceph/ceph:v20, name=elegant_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 07:59:58 compute-0 podman[95976]: 2026-01-31 07:59:58.251082556 +0000 UTC m=+0.022725638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 07:59:58 compute-0 systemd[1]: Started libpod-conmon-c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9.scope.
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:58 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev 9ae8ffb3-c172-44cc-841d-b8f043c3ab1b (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 07:59:58 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event 9ae8ffb3-c172-44cc-841d-b8f043c3ab1b (Updating rgw.rgw deployment (+1 -> 1)) in 5 seconds
Jan 31 07:59:58 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 31 07:59:58 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 07:59:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58342cd2294eaa705ced17911129aa5886c3527325ddc850af6a01870aa7eeb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58342cd2294eaa705ced17911129aa5886c3527325ddc850af6a01870aa7eeb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v124: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 07:59:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 07:59:58 compute-0 podman[95976]: 2026-01-31 07:59:58.874212275 +0000 UTC m=+0.645855357 container init c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9 (image=quay.io/ceph/ceph:v20, name=elegant_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 07:59:58 compute-0 podman[95976]: 2026-01-31 07:59:58.882876831 +0000 UTC m=+0.654519893 container start c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9 (image=quay.io/ceph/ceph:v20, name=elegant_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:59 compute-0 podman[95976]: 2026-01-31 07:59:59.016123456 +0000 UTC m=+0.787766538 container attach c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9 (image=quay.io/ceph/ceph:v20, name=elegant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 07:59:59 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev bb2cf1a1-1b57-4519-88c7-c1f7a73b12a9 (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 07:59:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xdvglw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xdvglw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 07:59:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xdvglw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:59:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 07:59:59 compute-0 ceph-mgr[75591]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.xdvglw on compute-0
Jan 31 07:59:59 compute-0 ceph-mgr[75591]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.xdvglw on compute-0
Jan 31 07:59:59 compute-0 sudo[96015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:59 compute-0 sudo[96015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:59 compute-0 sudo[96015]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:59 compute-0 sudo[96040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dc03f344-536f-5591-add9-31059f42637c
Jan 31 07:59:59 compute-0 sudo[96040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:59 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14255 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:59:59 compute-0 elegant_chandrasekhar[95992]: 
Jan 31 07:59:59 compute-0 elegant_chandrasekhar[95992]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:59:59 compute-0 systemd[1]: libpod-c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9.scope: Deactivated successfully.
Jan 31 07:59:59 compute-0 podman[95976]: 2026-01-31 07:59:59.338731981 +0000 UTC m=+1.110375043 container died c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9 (image=quay.io/ceph/ceph:v20, name=elegant_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:59:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 07:59:59 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38 pruub=12.581000328s) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active pruub 101.880897522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38 pruub=12.581000328s) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown pruub 101.880897522s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.1( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.1a( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.1e( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.c( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.e( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.10( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.14( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.12( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:59:59 compute-0 ansible-async_wrapper.py[95500]: Done in kid B.
Jan 31 08:00:00 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev e922d658-e4ba-4e14-ad67-7864377cba7e (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1638944089' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 08:00:00 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 40 pg[8.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:00 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=20/22 n=0 ec=20/20 lis/c=20/20 les/c/f=22/22/0 sis=40 pruub=14.020360947s) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active pruub 108.510963440s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:00 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=20/22 n=0 ec=20/20 lis/c=20/20 les/c/f=22/22/0 sis=40 pruub=14.020360947s) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown pruub 108.510963440s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:00 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 4 completed events
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:00 compute-0 ceph-mon[75294]: Saving service rgw.rgw spec with placement compute-0
Jan 31 08:00:00 compute-0 ceph-mon[75294]: pgmap v124: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xdvglw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xdvglw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 08:00:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-58342cd2294eaa705ced17911129aa5886c3527325ddc850af6a01870aa7eeb7-merged.mount: Deactivated successfully.
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1638944089' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 08:00:00 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=14.508916855s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active pruub 113.263633728s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:00 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=14.508916855s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown pruub 113.263633728s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v126: 101 pgs: 94 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 08:00:00 compute-0 ceph-mgr[75591]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Jan 31 08:00:00 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev 412c6b1e-f0ff-46f9-b965-8ef340f75125 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:00:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:00:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=20/22 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 podman[95976]: 2026-01-31 08:00:01.103313927 +0000 UTC m=+2.874956989 container remove c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9 (image=quay.io/ceph/ceph:v20, name=elegant_chandrasekhar, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:00:01 compute-0 sudo[95953]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 systemd[1]: libpod-conmon-c2e1ca1fb5903566596c8e47848d8b384fcd510a5a3face1bc9a01c87a66dbe9.scope: Deactivated successfully.
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=38/41 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2] r=0 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[8.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=40/41 n=0 ec=20/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=20/20 les/c/f=22/22/0 sis=40) [1] r=0 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:01 compute-0 podman[96118]: 2026-01-31 08:00:01.262949928 +0000 UTC m=+0.015419739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:01 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 08:00:01 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 08:00:01 compute-0 podman[96118]: 2026-01-31 08:00:01.599513221 +0000 UTC m=+0.351983052 container create 4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:00:01 compute-0 ceph-mon[75294]: Deploying daemon mds.cephfs.compute-0.xdvglw on compute-0
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='client.14255 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:00:01 compute-0 ceph-mon[75294]: osdmap e40: 3 total, 3 up, 3 in
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1638944089' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1638944089' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 08:00:01 compute-0 ceph-mon[75294]: pgmap v126: 101 pgs: 94 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:01 compute-0 ceph-mon[75294]: osdmap e41: 3 total, 3 up, 3 in
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:00:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:00:01 compute-0 systemd[1]: Started libpod-conmon-4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4.scope.
Jan 31 08:00:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:01 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 08:00:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 08:00:01 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 08:00:01 compute-0 podman[96118]: 2026-01-31 08:00:01.824513987 +0000 UTC m=+0.576983778 container init 4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:00:01 compute-0 podman[96118]: 2026-01-31 08:00:01.830917222 +0000 UTC m=+0.583387013 container start 4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shannon, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:00:01 compute-0 distracted_shannon[96134]: 167 167
Jan 31 08:00:01 compute-0 systemd[1]: libpod-4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4.scope: Deactivated successfully.
Jan 31 08:00:01 compute-0 sudo[96164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qytahbtrocakiagkgvizosmoznpulark ; /usr/bin/python3'
Jan 31 08:00:01 compute-0 sudo[96164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:01 compute-0 podman[96118]: 2026-01-31 08:00:01.945570903 +0000 UTC m=+0.698040714 container attach 4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:00:01 compute-0 podman[96118]: 2026-01-31 08:00:01.946328963 +0000 UTC m=+0.698798764 container died 4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:00:01 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:00:01 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:00:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 08:00:01 compute-0 python3[96177]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:02 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev caedb6a3-d93f-45dc-b47c-642f1e5091b1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev 7491a528-74a0-485b-a940-0bf22e7d6df5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event 7491a528-74a0-485b-a940-0bf22e7d6df5 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev fd98cbcc-cdd6-41a0-8878-4df7c415b88c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event fd98cbcc-cdd6-41a0-8878-4df7c415b88c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev 358d3c16-29b8-413a-99d8-9f59a11a5687 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event 358d3c16-29b8-413a-99d8-9f59a11a5687 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 4 seconds
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev e922d658-e4ba-4e14-ad67-7864377cba7e (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event e922d658-e4ba-4e14-ad67-7864377cba7e (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev 412c6b1e-f0ff-46f9-b965-8ef340f75125 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event 412c6b1e-f0ff-46f9-b965-8ef340f75125 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev caedb6a3-d93f-45dc-b47c-642f1e5091b1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event caedb6a3-d93f-45dc-b47c-642f1e5091b1 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=40/42 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9502fb4b3eeefd5ea143fbb6a7127e09da7a12d2764a8d26718f66d899132eaa-merged.mount: Deactivated successfully.
Jan 31 08:00:02 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 31 08:00:02 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 31 08:00:02 compute-0 podman[96118]: 2026-01-31 08:00:02.762265165 +0000 UTC m=+1.514734956 container remove 4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:00:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v129: 132 pgs: 93 unknown, 39 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Jan 31 08:00:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:00:02 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:00:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 31 08:00:02 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 08:00:02 compute-0 systemd[1]: libpod-conmon-4a591b43d1d007bbdfe80241b89b4d773b66297cfaf31d521a767db8fcd015c4.scope: Deactivated successfully.
Jan 31 08:00:02 compute-0 podman[96584]: 2026-01-31 08:00:02.919272475 +0000 UTC m=+0.916122561 container create 3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d (image=quay.io/ceph/ceph:v20, name=thirsty_kare, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:02 compute-0 ceph-mon[75294]: 2.1f scrub starts
Jan 31 08:00:02 compute-0 ceph-mon[75294]: 2.1f scrub ok
Jan 31 08:00:02 compute-0 ceph-mon[75294]: 3.1c scrub starts
Jan 31 08:00:02 compute-0 ceph-mon[75294]: 3.1c scrub ok
Jan 31 08:00:02 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:00:02 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:00:02 compute-0 ceph-mon[75294]: osdmap e42: 3 total, 3 up, 3 in
Jan 31 08:00:02 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:00:02 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 08:00:02 compute-0 podman[96584]: 2026-01-31 08:00:02.857083908 +0000 UTC m=+0.853934034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:00:02 compute-0 systemd[1]: Started libpod-conmon-3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d.scope.
Jan 31 08:00:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66833789e4241d9cf2698739a70f56e857fa63e23203060e72151b10f2aadaa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66833789e4241d9cf2698739a70f56e857fa63e23203060e72151b10f2aadaa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 08:00:03 compute-0 podman[96584]: 2026-01-31 08:00:03.232734882 +0000 UTC m=+1.229584958 container init 3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d (image=quay.io/ceph/ceph:v20, name=thirsty_kare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:00:03 compute-0 podman[96584]: 2026-01-31 08:00:03.237207343 +0000 UTC m=+1.234057419 container start 3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d (image=quay.io/ceph/ceph:v20, name=thirsty_kare, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:00:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:00:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 08:00:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 08:00:03 compute-0 systemd[1]: Reloading.
Jan 31 08:00:03 compute-0 systemd-rc-local-generator[96805]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:00:03 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 31 08:00:03 compute-0 systemd-sysv-generator[96811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:00:03 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 31 08:00:03 compute-0 podman[96584]: 2026-01-31 08:00:03.581017924 +0000 UTC m=+1.577868030 container attach 3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d (image=quay.io/ceph/ceph:v20, name=thirsty_kare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 08:00:03 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 08:00:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 31 08:00:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 08:00:03 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:00:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} v 0)
Jan 31 08:00:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:00:03 compute-0 thirsty_kare[96756]: 
Jan 31 08:00:03 compute-0 thirsty_kare[96756]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 31 08:00:03 compute-0 podman[96584]: 2026-01-31 08:00:03.657804537 +0000 UTC m=+1.654654623 container died 3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d (image=quay.io/ceph/ceph:v20, name=thirsty_kare, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:03 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 31 08:00:03 compute-0 systemd[1]: libpod-3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d.scope: Deactivated successfully.
Jan 31 08:00:03 compute-0 systemd[1]: Reloading.
Jan 31 08:00:03 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43 pruub=8.266251564s) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active pruub 110.160461426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:03 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43 pruub=8.266251564s) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown pruub 110.160461426s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:03 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 43 pg[9.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:03 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=10.695676804s) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active pruub 108.773933411s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:03 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 31 08:00:03 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=10.695676804s) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown pruub 108.773933411s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:03 compute-0 systemd-rc-local-generator[96861]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:00:03 compute-0 systemd-sysv-generator[96864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:00:03 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=42 pruub=13.613417625s) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active pruub 107.279090881s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:03 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=42 pruub=13.613417625s) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown pruub 107.279090881s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.xdvglw for dc03f344-536f-5591-add9-31059f42637c...
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.1f( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.1c( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.10( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.17( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.8( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.a( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.b( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.6( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.d( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.e( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.1b( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=24/25 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:04 compute-0 ceph-mon[75294]: 3.1f scrub starts
Jan 31 08:00:04 compute-0 ceph-mon[75294]: 3.1f scrub ok
Jan 31 08:00:04 compute-0 ceph-mon[75294]: pgmap v129: 132 pgs: 93 unknown, 39 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Jan 31 08:00:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:00:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 08:00:04 compute-0 ceph-mon[75294]: osdmap e43: 3 total, 3 up, 3 in
Jan 31 08:00:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 08:00:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:00:04 compute-0 ceph-mon[75294]: 3.1e scrub starts
Jan 31 08:00:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 08:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66833789e4241d9cf2698739a70f56e857fa63e23203060e72151b10f2aadaa-merged.mount: Deactivated successfully.
Jan 31 08:00:04 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 08:00:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 08:00:04 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 08:00:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v132: 179 pgs: 78 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Jan 31 08:00:04 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:05 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [0] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[9.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 podman[96584]: 2026-01-31 08:00:05.203011779 +0000 UTC m=+3.199861875 container remove 3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d (image=quay.io/ceph/ceph:v20, name=thirsty_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=42/44 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=42) [2] r=0 lpr=42 pi=[24,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:05 compute-0 sudo[96164]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:05 compute-0 systemd[1]: libpod-conmon-3feeaf4ed59b02b0b1f08472012296a1ee1d536b35ff580ad27dae83b653515d.scope: Deactivated successfully.
Jan 31 08:00:05 compute-0 ceph-mon[75294]: 2.1d scrub starts
Jan 31 08:00:05 compute-0 ceph-mon[75294]: 2.1d scrub ok
Jan 31 08:00:05 compute-0 ceph-mon[75294]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:00:05 compute-0 ceph-mon[75294]: 3.1e scrub ok
Jan 31 08:00:05 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 08:00:05 compute-0 ceph-mon[75294]: osdmap e44: 3 total, 3 up, 3 in
Jan 31 08:00:05 compute-0 ceph-mon[75294]: pgmap v132: 179 pgs: 78 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Jan 31 08:00:05 compute-0 ceph-mon[75294]: 4.1f scrub starts
Jan 31 08:00:05 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 08:00:05 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 08:00:05 compute-0 podman[96922]: 2026-01-31 08:00:05.394409344 +0000 UTC m=+0.019774508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:05 compute-0 podman[96922]: 2026-01-31 08:00:05.503049721 +0000 UTC m=+0.128414855 container create 600fe0d252669b010151658572ae8f3e34ce5cf806819c48f355f71862bcc99a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mds-cephfs-compute-0-xdvglw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2b3ce0a3f940bcef1b210990a97b8bb5784379da7de5cf34aa54c4b99863d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2b3ce0a3f940bcef1b210990a97b8bb5784379da7de5cf34aa54c4b99863d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2b3ce0a3f940bcef1b210990a97b8bb5784379da7de5cf34aa54c4b99863d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2b3ce0a3f940bcef1b210990a97b8bb5784379da7de5cf34aa54c4b99863d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.xdvglw supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 08:00:05 compute-0 podman[96922]: 2026-01-31 08:00:05.732487928 +0000 UTC m=+0.357853062 container init 600fe0d252669b010151658572ae8f3e34ce5cf806819c48f355f71862bcc99a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mds-cephfs-compute-0-xdvglw, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:05 compute-0 podman[96922]: 2026-01-31 08:00:05.73921392 +0000 UTC m=+0.364579064 container start 600fe0d252669b010151658572ae8f3e34ce5cf806819c48f355f71862bcc99a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mds-cephfs-compute-0-xdvglw, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:00:05 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 10 completed events
Jan 31 08:00:05 compute-0 bash[96922]: 600fe0d252669b010151658572ae8f3e34ce5cf806819c48f355f71862bcc99a
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:00:05 compute-0 ceph-mds[96942]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:00:05 compute-0 ceph-mds[96942]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 31 08:00:05 compute-0 ceph-mds[96942]: main not setting numa affinity
Jan 31 08:00:05 compute-0 ceph-mds[96942]: pidfile_write: ignore empty --pid-file
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 08:00:05 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mds-cephfs-compute-0-xdvglw[96938]: starting mds.cephfs.compute-0.xdvglw at 
Jan 31 08:00:05 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.xdvglw for dc03f344-536f-5591-add9-31059f42637c.
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 08:00:05 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw Updating MDS map to version 2 from mon.0
Jan 31 08:00:05 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:05 compute-0 sudo[96040]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:05 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev bb2cf1a1-1b57-4519-88c7-c1f7a73b12a9 (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 08:00:05 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event bb2cf1a1-1b57-4519-88c7-c1f7a73b12a9 (Updating mds.cephfs deployment (+1 -> 1)) in 7 seconds
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 08:00:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:05 compute-0 sudo[96962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:00:05 compute-0 sudo[96962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:05 compute-0 sudo[96962]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:05 compute-0 sudo[97022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yazhxemlhisnxvwivevglsymbdmherez ; /usr/bin/python3'
Jan 31 08:00:05 compute-0 sudo[97022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:06 compute-0 sudo[96995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:06 compute-0 sudo[96995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:06 compute-0 sudo[96995]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:06 compute-0 sudo[97038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:00:06 compute-0 sudo[97038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:06 compute-0 python3[97035]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:06 compute-0 podman[97063]: 2026-01-31 08:00:06.175959752 +0000 UTC m=+0.038746872 container create 201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa (image=quay.io/ceph/ceph:v20, name=flamboyant_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:00:06 compute-0 systemd[1]: Started libpod-conmon-201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa.scope.
Jan 31 08:00:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818472c10c28d8d355c2688ca752841ef7e2789498b411c14d3dc83e644ca42f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818472c10c28d8d355c2688ca752841ef7e2789498b411c14d3dc83e644ca42f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:06 compute-0 podman[97063]: 2026-01-31 08:00:06.253518377 +0000 UTC m=+0.116305527 container init 201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa (image=quay.io/ceph/ceph:v20, name=flamboyant_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:00:06 compute-0 podman[97063]: 2026-01-31 08:00:06.158964071 +0000 UTC m=+0.021751221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:00:06 compute-0 podman[97063]: 2026-01-31 08:00:06.259388256 +0000 UTC m=+0.122175386 container start 201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa (image=quay.io/ceph/ceph:v20, name=flamboyant_hofstadter, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:06 compute-0 podman[97063]: 2026-01-31 08:00:06.263692193 +0000 UTC m=+0.126479383 container attach 201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa (image=quay.io/ceph/ceph:v20, name=flamboyant_hofstadter, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:00:06 compute-0 podman[97128]: 2026-01-31 08:00:06.408223995 +0000 UTC m=+0.065476328 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:00:06 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 45 pg[10.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [2] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:06 compute-0 podman[97128]: 2026-01-31 08:00:06.504076286 +0000 UTC m=+0.161328589 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:06 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:00:06 compute-0 flamboyant_hofstadter[97078]: 
Jan 31 08:00:06 compute-0 flamboyant_hofstadter[97078]: [{"container_id": "1e3014b21f61", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.12%", "created": "2026-01-31T07:57:37.651778Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T07:57:38.669748Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:59:46.993112Z", "memory_usage": 7812939, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-31T07:57:36.944904Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dc03f344-536f-5591-add9-31059f42637c@crash.compute-0", "version": "20.2.0"}, {"daemon_id": "cephfs.compute-0.xdvglw", "daemon_name": "mds.cephfs.compute-0.xdvglw", "daemon_type": "mds", "events": ["2026-01-31T08:00:05.887146Z daemon:mds.cephfs.compute-0.xdvglw [INFO] \"Deployed mds.cephfs.compute-0.xdvglw on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "81f4bb2dc444", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "10.81%", "created": 
"2026-01-31T07:56:31.263034Z", "daemon_id": "compute-0.lhuavc", "daemon_name": "mgr.compute-0.lhuavc", "daemon_type": "mgr", "events": ["2026-01-31T07:57:55.695117Z daemon:mgr.compute-0.lhuavc [INFO] \"Reconfigured mgr.compute-0.lhuavc on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:59:46.993045Z", "memory_usage": 548929536, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-31T07:56:30.966360Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dc03f344-536f-5591-add9-31059f42637c@mgr.compute-0.lhuavc", "version": "20.2.0"}, {"container_id": "46fb178204c1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "1.78%", "created": "2026-01-31T07:56:24.117847Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T07:57:54.446982Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:59:46.992951Z", "memory_request": 2147483648, "memory_usage": 42331013, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-31T07:56:27.905891Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dc03f344-536f-5591-add9-31059f42637c@mon.compute-0", "version": "20.2.0"}, {"container_id": "4e80450fb78b", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": 
"524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.00%", "created": "2026-01-31T07:58:21.498951Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-31T07:58:21.592506Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:59:46.993211Z", "memory_request": 4294967296, "memory_usage": 57126420, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T07:58:21.415894Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dc03f344-536f-5591-add9-31059f42637c@osd.0", "version": "20.2.0"}, {"container_id": "b3da993d541d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.16%", "created": "2026-01-31T07:58:25.222063Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-31T07:58:25.371770Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:59:46.993278Z", "memory_request": 4294967296, "memory_usage": 59129200, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T07:58:25.079505Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dc03f344-536f-5591-add9-31059f42637c@osd.1", "version": "20.2.0"}, {"container_id": "f5583687da90", "container_image_digests": 
["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.11%", "created": "2026-01-31T07:58:29.613194Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-31T07:58:29.823211Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:59:46.993341Z", "memory_request": 4294967296, "memory_usage": 56906219, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T07:58:29.375337Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dc03f344-536f-5591-add9-31059f42637c@osd.2", "version": "20.2.0"}, {"daemon_id": "rgw.compute-0.ockecq", "daemon_name": "rgw.rgw.compute-0.ockecq", "daemon_type": "rgw", "events": ["2026-01-31T07:59:58.484240Z daemon:rgw.rgw.compute-0.ockecq [INFO] \"Deployed rgw.rgw.compute-0.ockecq on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Jan 31 08:00:06 compute-0 podman[97063]: 2026-01-31 08:00:06.689047595 +0000 UTC m=+0.551834715 container died 201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa (image=quay.io/ceph/ceph:v20, name=flamboyant_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:06 compute-0 systemd[1]: libpod-201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa.scope: Deactivated successfully.
Jan 31 08:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-818472c10c28d8d355c2688ca752841ef7e2789498b411c14d3dc83e644ca42f-merged.mount: Deactivated successfully.
Jan 31 08:00:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v134: 180 pgs: 79 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s
Jan 31 08:00:06 compute-0 podman[97063]: 2026-01-31 08:00:06.782902442 +0000 UTC m=+0.645689562 container remove 201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa (image=quay.io/ceph/ceph:v20, name=flamboyant_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:00:06 compute-0 systemd[1]: libpod-conmon-201183d9593f59c5c4b8d56bd8be9fc73b292f19d8cf82b79748c9ea7cc95ffa.scope: Deactivated successfully.
Jan 31 08:00:06 compute-0 sudo[97022]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e3 new map
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-31T08:00:06:806886+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:59:40.548267+0000
                                           modified        2026-01-31T07:59:40.548268+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.xdvglw{-1:14262} state up:standby seq 1 addr [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] compat {c=[1],r=[1],i=[1fff]}]
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw Updating MDS map to version 3 from mon.0
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw Monitors have assigned me to become a standby
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] up:boot
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] as mds.0
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.xdvglw assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.xdvglw"} v 0)
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.xdvglw"} : dispatch
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e4 new map
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-31T08:00:06:821205+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:59:40.548267+0000
                                           modified        2026-01-31T08:00:06.821155+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14262}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.xdvglw{0:14262} state up:creating seq 1 addr [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw Updating MDS map to version 4 from mon.0
Jan 31 08:00:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x1
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x100
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x600
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x601
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x602
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x603
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x604
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x605
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x606
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x607
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x608
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.cache creating system inode with ino:0x609
Jan 31 08:00:06 compute-0 ceph-mon[75294]: 4.1f scrub ok
Jan 31 08:00:06 compute-0 ceph-mon[75294]: 2.1e scrub starts
Jan 31 08:00:06 compute-0 ceph-mon[75294]: 2.1e scrub ok
Jan 31 08:00:06 compute-0 ceph-mon[75294]: osdmap e45: 3 total, 3 up, 3 in
Jan 31 08:00:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 08:00:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.xdvglw=up:creating}
Jan 31 08:00:06 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 46 pg[10.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [2] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:06 compute-0 ceph-mds[96942]: mds.0.4 creating_done
Jan 31 08:00:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.xdvglw is now active in filesystem cephfs as rank 0
Jan 31 08:00:07 compute-0 sudo[97038]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:07 compute-0 sudo[97356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:07 compute-0 sudo[97356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:07 compute-0 sudo[97356]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:07 compute-0 sudo[97381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:00:07 compute-0 sudo[97381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:07 compute-0 podman[97418]: 2026-01-31 08:00:07.566165228 +0000 UTC m=+0.050252194 container create 0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:07 compute-0 sudo[97455]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnsgfswsrgnxjvbunaygrgdtsmfkupnd ; /usr/bin/python3'
Jan 31 08:00:07 compute-0 sudo[97455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:07 compute-0 systemd[1]: Started libpod-conmon-0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c.scope.
Jan 31 08:00:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:07 compute-0 podman[97418]: 2026-01-31 08:00:07.536323518 +0000 UTC m=+0.020410504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:07 compute-0 podman[97418]: 2026-01-31 08:00:07.658567186 +0000 UTC m=+0.142654172 container init 0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:00:07 compute-0 podman[97418]: 2026-01-31 08:00:07.664085396 +0000 UTC m=+0.148172352 container start 0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:07 compute-0 sad_bell[97460]: 167 167
Jan 31 08:00:07 compute-0 systemd[1]: libpod-0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c.scope: Deactivated successfully.
Jan 31 08:00:07 compute-0 conmon[97460]: conmon 0e0412e5d51e93e06553 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c.scope/container/memory.events
Jan 31 08:00:07 compute-0 podman[97418]: 2026-01-31 08:00:07.670680214 +0000 UTC m=+0.154767190 container attach 0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:07 compute-0 podman[97418]: 2026-01-31 08:00:07.671132607 +0000 UTC m=+0.155219573 container died 0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:07 compute-0 python3[97459]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2812fb8e3211ebbc740756f64867c185f60aca784e78951bcca9f20430255900-merged.mount: Deactivated successfully.
Jan 31 08:00:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 31 08:00:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 31 08:00:07 compute-0 podman[97418]: 2026-01-31 08:00:07.784430892 +0000 UTC m=+0.268517848 container remove 0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:07 compute-0 systemd[1]: libpod-conmon-0e0412e5d51e93e0655320b1cd681126ad37f21bd40b20984a64673683960a3c.scope: Deactivated successfully.
Jan 31 08:00:07 compute-0 podman[97476]: 2026-01-31 08:00:07.808898135 +0000 UTC m=+0.065595671 container create f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad (image=quay.io/ceph/ceph:v20, name=great_wu, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e5 new map
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-31T08:00:07:828568+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:59:40.548267+0000
                                           modified        2026-01-31T08:00:07.828565+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14262}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14262 members: 14262
                                           [mds.cephfs.compute-0.xdvglw{0:14262} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 31 08:00:07 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw Updating MDS map to version 5 from mon.0
Jan 31 08:00:07 compute-0 ceph-mds[96942]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 08:00:07 compute-0 ceph-mds[96942]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 31 08:00:07 compute-0 ceph-mds[96942]: mds.0.4 recovery_done -- successful recovery!
Jan 31 08:00:07 compute-0 ceph-mds[96942]: mds.0.4 active_start
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] up:active
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.xdvglw=up:active}
Jan 31 08:00:07 compute-0 systemd[1]: Started libpod-conmon-f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad.scope.
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 08:00:07 compute-0 podman[97476]: 2026-01-31 08:00:07.765565339 +0000 UTC m=+0.022262895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: pgmap v134: 180 pgs: 79 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mds.? [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] up:boot
Jan 31 08:00:07 compute-0 ceph-mon[75294]: daemon mds.cephfs.compute-0.xdvglw assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: Cluster is now healthy
Jan 31 08:00:07 compute-0 ceph-mon[75294]: fsmap cephfs:0 1 up:standby
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.xdvglw"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 08:00:07 compute-0 ceph-mon[75294]: osdmap e46: 3 total, 3 up, 3 in
Jan 31 08:00:07 compute-0 ceph-mon[75294]: fsmap cephfs:1 {0=cephfs.compute-0.xdvglw=up:creating}
Jan 31 08:00:07 compute-0 ceph-mon[75294]: daemon mds.cephfs.compute-0.xdvglw is now active in filesystem cephfs as rank 0
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:00:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ba12a8619de675924a431f7887da921417eaccbf342ea772bf5be6c559d8e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ba12a8619de675924a431f7887da921417eaccbf342ea772bf5be6c559d8e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 08:00:07 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 47 pg[11.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 31 08:00:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 08:00:07 compute-0 podman[97476]: 2026-01-31 08:00:07.900086519 +0000 UTC m=+0.156784085 container init f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad (image=quay.io/ceph/ceph:v20, name=great_wu, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:00:07 compute-0 podman[97476]: 2026-01-31 08:00:07.905528387 +0000 UTC m=+0.162225923 container start f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad (image=quay.io/ceph/ceph:v20, name=great_wu, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:00:07 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 31 08:00:07 compute-0 podman[97476]: 2026-01-31 08:00:07.917442221 +0000 UTC m=+0.174139757 container attach f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad (image=quay.io/ceph/ceph:v20, name=great_wu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:00:07 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 31 08:00:07 compute-0 podman[97505]: 2026-01-31 08:00:07.959373839 +0000 UTC m=+0.088217985 container create a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:00:08 compute-0 podman[97505]: 2026-01-31 08:00:07.90747112 +0000 UTC m=+0.036315386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:08 compute-0 systemd[1]: Started libpod-conmon-a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2.scope.
Jan 31 08:00:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28304c02aa908b437866d1750f4f14df027fca236b9b4b6e9292b1e7af6945ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28304c02aa908b437866d1750f4f14df027fca236b9b4b6e9292b1e7af6945ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28304c02aa908b437866d1750f4f14df027fca236b9b4b6e9292b1e7af6945ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28304c02aa908b437866d1750f4f14df027fca236b9b4b6e9292b1e7af6945ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28304c02aa908b437866d1750f4f14df027fca236b9b4b6e9292b1e7af6945ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:08 compute-0 podman[97505]: 2026-01-31 08:00:08.055443325 +0000 UTC m=+0.184287491 container init a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_satoshi, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:08 compute-0 podman[97505]: 2026-01-31 08:00:08.063172646 +0000 UTC m=+0.192016792 container start a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_satoshi, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:00:08 compute-0 podman[97505]: 2026-01-31 08:00:08.067818081 +0000 UTC m=+0.196662317 container attach a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:00:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 08:00:08 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327382741' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:00:08 compute-0 great_wu[97498]: 
Jan 31 08:00:08 compute-0 great_wu[97498]: {"fsid":"dc03f344-536f-5591-add9-31059f42637c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":219,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":47,"num_osds":3,"num_up_osds":3,"osd_up_since":1769846318,"num_in_osds":3,"osd_in_since":1769846291,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":101},{"state_name":"unknown","count":79}],"num_pgs":180,"num_pools":10,"num_objects":8,"data_bytes":460960,"bytes_used":84221952,"bytes_avail":64327704576,"bytes_total":64411926528,"unknown_pgs_ratio":0.43888887763023376,"read_bytes_sec":1088,"write_bytes_sec":1524,"read_op_per_sec":0,"write_op_per_sec":2},"fsmap":{"epoch":5,"btime":"2026-01-31T08:00:07:828568+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.xdvglw","status":"up:active","gid":14262}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":6,"modified":"2026-01-31T07:59:38.765661+0000","services":{}},"progress_events":{"a52e221b-5ba9-484e-8919-9d9b11a96ab4":{"message":"Global Recovery Event (5s)\n      [===============.............] (remaining: 3s)","progress":0.56111109256744385,"add_to_ceph_s":true}}}
Jan 31 08:00:08 compute-0 romantic_satoshi[97541]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:00:08 compute-0 romantic_satoshi[97541]: --> All data devices are unavailable
Jan 31 08:00:08 compute-0 systemd[1]: libpod-f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad.scope: Deactivated successfully.
Jan 31 08:00:08 compute-0 podman[97476]: 2026-01-31 08:00:08.453215749 +0000 UTC m=+0.709913285 container died f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad (image=quay.io/ceph/ceph:v20, name=great_wu, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:00:08 compute-0 systemd[1]: libpod-a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2.scope: Deactivated successfully.
Jan 31 08:00:08 compute-0 podman[97505]: 2026-01-31 08:00:08.494836259 +0000 UTC m=+0.623680435 container died a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-28304c02aa908b437866d1750f4f14df027fca236b9b4b6e9292b1e7af6945ec-merged.mount: Deactivated successfully.
Jan 31 08:00:08 compute-0 podman[97505]: 2026-01-31 08:00:08.61723163 +0000 UTC m=+0.746075776 container remove a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_satoshi, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:08 compute-0 systemd[1]: libpod-conmon-a3f153ee7c90e72d3ac7ca70e2a9d1c5e68ffe394ab8437a5ff2f9f4a81983c2.scope: Deactivated successfully.
Jan 31 08:00:08 compute-0 sudo[97381]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ba12a8619de675924a431f7887da921417eaccbf342ea772bf5be6c559d8e9-merged.mount: Deactivated successfully.
Jan 31 08:00:08 compute-0 podman[97476]: 2026-01-31 08:00:08.684844106 +0000 UTC m=+0.941541642 container remove f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad (image=quay.io/ceph/ceph:v20, name=great_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:00:08 compute-0 systemd[1]: libpod-conmon-f7ea54247fdc378246f50525165b00535b27eb44c9fb9e4e111b306d78b324ad.scope: Deactivated successfully.
Jan 31 08:00:08 compute-0 sudo[97455]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:08 compute-0 sudo[97590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:08 compute-0 sudo[97590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:08 compute-0 sudo[97590]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:08 compute-0 sudo[97615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:00:08 compute-0 sudo[97615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v137: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 3.9 KiB/s wr, 12 op/s
Jan 31 08:00:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 08:00:08 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 08:00:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 08:00:08 compute-0 ceph-mon[75294]: 3.8 scrub starts
Jan 31 08:00:08 compute-0 ceph-mon[75294]: 3.8 scrub ok
Jan 31 08:00:08 compute-0 ceph-mon[75294]: mds.? [v2:192.168.122.100:6814/2119598981,v1:192.168.122.100:6815/2119598981] up:active
Jan 31 08:00:08 compute-0 ceph-mon[75294]: fsmap cephfs:1 {0=cephfs.compute-0.xdvglw=up:active}
Jan 31 08:00:08 compute-0 ceph-mon[75294]: osdmap e47: 3 total, 3 up, 3 in
Jan 31 08:00:08 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 08:00:08 compute-0 ceph-mon[75294]: 4.8 scrub starts
Jan 31 08:00:08 compute-0 ceph-mon[75294]: 4.8 scrub ok
Jan 31 08:00:08 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2327382741' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:00:08 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 31 08:00:08 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 31 08:00:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 48 pg[11.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:08 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 08:00:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 31 08:00:08 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 08:00:09 compute-0 podman[97651]: 2026-01-31 08:00:09.101615005 +0000 UTC m=+0.110396986 container create ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:00:09 compute-0 podman[97651]: 2026-01-31 08:00:09.009492076 +0000 UTC m=+0.018274017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:09 compute-0 systemd[1]: Started libpod-conmon-ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54.scope.
Jan 31 08:00:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:09 compute-0 podman[97651]: 2026-01-31 08:00:09.252270293 +0000 UTC m=+0.261052254 container init ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:00:09 compute-0 podman[97651]: 2026-01-31 08:00:09.256437547 +0000 UTC m=+0.265219478 container start ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_montalcini, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:09 compute-0 keen_montalcini[97667]: 167 167
Jan 31 08:00:09 compute-0 systemd[1]: libpod-ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54.scope: Deactivated successfully.
Jan 31 08:00:09 compute-0 podman[97651]: 2026-01-31 08:00:09.274846576 +0000 UTC m=+0.283628507 container attach ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:09 compute-0 podman[97651]: 2026-01-31 08:00:09.275432962 +0000 UTC m=+0.284214903 container died ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_montalcini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbe08440cb4499ad2283940c62374fec9d9142a037fbe05a0bded62c7d57d609-merged.mount: Deactivated successfully.
Jan 31 08:00:09 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 31 08:00:09 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 31 08:00:09 compute-0 podman[97651]: 2026-01-31 08:00:09.427277823 +0000 UTC m=+0.436059754 container remove ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:00:09 compute-0 systemd[1]: libpod-conmon-ef74f20b0d713652adca61fbc5983217b7cd390b0d6f09682b9ae0aa12353b54.scope: Deactivated successfully.
Jan 31 08:00:09 compute-0 sudo[97722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sebhiwcqxblouvoiqwuxpudcrnfevroe ; /usr/bin/python3'
Jan 31 08:00:09 compute-0 sudo[97722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:09 compute-0 podman[97710]: 2026-01-31 08:00:09.582452224 +0000 UTC m=+0.076826256 container create b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:00:09 compute-0 podman[97710]: 2026-01-31 08:00:09.526120715 +0000 UTC m=+0.020494857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:09 compute-0 systemd[1]: Started libpod-conmon-b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d.scope.
Jan 31 08:00:09 compute-0 python3[97729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39bd70d4a5384a90f0deab17208bde40d2869d1fadeb7a3f145cc69f4501ba1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39bd70d4a5384a90f0deab17208bde40d2869d1fadeb7a3f145cc69f4501ba1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39bd70d4a5384a90f0deab17208bde40d2869d1fadeb7a3f145cc69f4501ba1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39bd70d4a5384a90f0deab17208bde40d2869d1fadeb7a3f145cc69f4501ba1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:09 compute-0 podman[97710]: 2026-01-31 08:00:09.722016321 +0000 UTC m=+0.216390403 container init b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:00:09 compute-0 podman[97710]: 2026-01-31 08:00:09.728284761 +0000 UTC m=+0.222658793 container start b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_herschel, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:00:09 compute-0 podman[97710]: 2026-01-31 08:00:09.751475781 +0000 UTC m=+0.245849843 container attach b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:00:09 compute-0 podman[97739]: 2026-01-31 08:00:09.729914255 +0000 UTC m=+0.032366069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:00:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 08:00:09 compute-0 podman[97739]: 2026-01-31 08:00:09.973237748 +0000 UTC m=+0.275689552 container create aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8 (image=quay.io/ceph/ceph:v20, name=pensive_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:00:09 compute-0 musing_herschel[97736]: {
Jan 31 08:00:09 compute-0 musing_herschel[97736]:     "0": [
Jan 31 08:00:09 compute-0 musing_herschel[97736]:         {
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "devices": [
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "/dev/loop3"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             ],
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_name": "ceph_lv0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_size": "21470642176",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "name": "ceph_lv0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "tags": {
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cluster_name": "ceph",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.crush_device_class": "",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.encrypted": "0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.objectstore": "bluestore",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osd_id": "0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.type": "block",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.vdo": "0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.with_tpm": "0"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             },
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "type": "block",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "vg_name": "ceph_vg0"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:         }
Jan 31 08:00:09 compute-0 musing_herschel[97736]:     ],
Jan 31 08:00:09 compute-0 musing_herschel[97736]:     "1": [
Jan 31 08:00:09 compute-0 musing_herschel[97736]:         {
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "devices": [
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "/dev/loop4"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             ],
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_name": "ceph_lv1",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_size": "21470642176",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "name": "ceph_lv1",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "tags": {
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cluster_name": "ceph",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.crush_device_class": "",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.encrypted": "0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.objectstore": "bluestore",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osd_id": "1",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.type": "block",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.vdo": "0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.with_tpm": "0"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             },
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "type": "block",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "vg_name": "ceph_vg1"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:         }
Jan 31 08:00:09 compute-0 musing_herschel[97736]:     ],
Jan 31 08:00:09 compute-0 musing_herschel[97736]:     "2": [
Jan 31 08:00:09 compute-0 musing_herschel[97736]:         {
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "devices": [
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "/dev/loop5"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             ],
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_name": "ceph_lv2",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_size": "21470642176",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "name": "ceph_lv2",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "tags": {
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.cluster_name": "ceph",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.crush_device_class": "",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.encrypted": "0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.objectstore": "bluestore",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osd_id": "2",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.type": "block",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.vdo": "0",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:                 "ceph.with_tpm": "0"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             },
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "type": "block",
Jan 31 08:00:09 compute-0 musing_herschel[97736]:             "vg_name": "ceph_vg2"
Jan 31 08:00:09 compute-0 musing_herschel[97736]:         }
Jan 31 08:00:09 compute-0 musing_herschel[97736]:     ]
Jan 31 08:00:09 compute-0 musing_herschel[97736]: }
Jan 31 08:00:09 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 08:00:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 08:00:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 08:00:10 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 08:00:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 08:00:10 compute-0 podman[97710]: 2026-01-31 08:00:10.021017355 +0000 UTC m=+0.515391387 container died b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:00:10 compute-0 systemd[1]: Started libpod-conmon-aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8.scope.
Jan 31 08:00:10 compute-0 systemd[1]: libpod-b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d.scope: Deactivated successfully.
Jan 31 08:00:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebe478b8cd17cdbd9550b353426c93c8064fea496234c4afbed014e6108538/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebe478b8cd17cdbd9550b353426c93c8064fea496234c4afbed014e6108538/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:10 compute-0 ceph-mon[75294]: pgmap v137: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 3.9 KiB/s wr, 12 op/s
Jan 31 08:00:10 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 08:00:10 compute-0 ceph-mon[75294]: 4.1c scrub starts
Jan 31 08:00:10 compute-0 ceph-mon[75294]: 4.1c scrub ok
Jan 31 08:00:10 compute-0 ceph-mon[75294]: osdmap e48: 3 total, 3 up, 3 in
Jan 31 08:00:10 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 08:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-39bd70d4a5384a90f0deab17208bde40d2869d1fadeb7a3f145cc69f4501ba1b-merged.mount: Deactivated successfully.
Jan 31 08:00:10 compute-0 podman[97710]: 2026-01-31 08:00:10.610913853 +0000 UTC m=+1.105287905 container remove b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:10 compute-0 systemd[1]: libpod-conmon-b42c982bc19a2d9bbc8ca00f9fa249205826b65df2592ffa793f53cefbcf749d.scope: Deactivated successfully.
Jan 31 08:00:10 compute-0 sudo[97615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:10 compute-0 sudo[97775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:10 compute-0 sudo[97775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:10 compute-0 sudo[97775]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:10 compute-0 podman[97739]: 2026-01-31 08:00:10.72168927 +0000 UTC m=+1.024141084 container init aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8 (image=quay.io/ceph/ceph:v20, name=pensive_aryabhata, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:00:10 compute-0 podman[97739]: 2026-01-31 08:00:10.726956342 +0000 UTC m=+1.029408136 container start aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8 (image=quay.io/ceph/ceph:v20, name=pensive_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:00:10 compute-0 sudo[97800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:00:10 compute-0 sudo[97800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v140: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Jan 31 08:00:10 compute-0 podman[97739]: 2026-01-31 08:00:10.820813639 +0000 UTC m=+1.123265433 container attach aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8 (image=quay.io/ceph/ceph:v20, name=pensive_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:00:10 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 11 completed events
Jan 31 08:00:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:00:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:11 compute-0 podman[97858]: 2026-01-31 08:00:10.971459837 +0000 UTC m=+0.019316985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:11 compute-0 podman[97858]: 2026-01-31 08:00:11.090997371 +0000 UTC m=+0.138854529 container create dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mclean, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:11 compute-0 systemd[1]: Started libpod-conmon-dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a.scope.
Jan 31 08:00:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 08:00:11 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461810174' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:00:11 compute-0 pensive_aryabhata[97766]: 
Jan 31 08:00:11 compute-0 pensive_aryabhata[97766]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.ockecq","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 31 08:00:11 compute-0 podman[97739]: 2026-01-31 08:00:11.1794276 +0000 UTC m=+1.481879404 container died aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8 (image=quay.io/ceph/ceph:v20, name=pensive_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:00:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:11 compute-0 systemd[1]: libpod-aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8.scope: Deactivated successfully.
Jan 31 08:00:11 compute-0 ceph-mon[75294]: 2.b scrub starts
Jan 31 08:00:11 compute-0 ceph-mon[75294]: 2.b scrub ok
Jan 31 08:00:11 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1105333339' entity='client.rgw.rgw.compute-0.ockecq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 08:00:11 compute-0 ceph-mon[75294]: 4.1e scrub starts
Jan 31 08:00:11 compute-0 ceph-mon[75294]: osdmap e49: 3 total, 3 up, 3 in
Jan 31 08:00:11 compute-0 ceph-mon[75294]: 4.1e scrub ok
Jan 31 08:00:11 compute-0 ceph-mon[75294]: pgmap v140: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Jan 31 08:00:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:11 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2461810174' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-63ebe478b8cd17cdbd9550b353426c93c8064fea496234c4afbed014e6108538-merged.mount: Deactivated successfully.
Jan 31 08:00:11 compute-0 podman[97739]: 2026-01-31 08:00:11.342986559 +0000 UTC m=+1.645438353 container remove aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8 (image=quay.io/ceph/ceph:v20, name=pensive_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:11 compute-0 systemd[1]: libpod-conmon-aef274248ac3de19f597db779ed66b131a47e02c611ebce1d6337285df8231a8.scope: Deactivated successfully.
Jan 31 08:00:11 compute-0 sudo[97722]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:11 compute-0 podman[97858]: 2026-01-31 08:00:11.377371382 +0000 UTC m=+0.425228520 container init dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mclean, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:00:11 compute-0 podman[97858]: 2026-01-31 08:00:11.383058596 +0000 UTC m=+0.430915734 container start dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mclean, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:00:11 compute-0 interesting_mclean[97874]: 167 167
Jan 31 08:00:11 compute-0 systemd[1]: libpod-dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a.scope: Deactivated successfully.
Jan 31 08:00:11 compute-0 podman[97858]: 2026-01-31 08:00:11.388290889 +0000 UTC m=+0.436148047 container attach dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:11 compute-0 podman[97858]: 2026-01-31 08:00:11.389761389 +0000 UTC m=+0.437618517 container died dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d6a96a976836ceb6a4a58813183a7a22c39c7b8c2c23db1fcc56a8357d98ffb-merged.mount: Deactivated successfully.
Jan 31 08:00:11 compute-0 podman[97858]: 2026-01-31 08:00:11.432446746 +0000 UTC m=+0.480303874 container remove dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:00:11 compute-0 systemd[1]: libpod-conmon-dd0193c87c264fef1b48168c596e311552f9253f59e391ca76f0a2930e2f4b7a.scope: Deactivated successfully.
Jan 31 08:00:11 compute-0 radosgw[95921]: v1 topic migration: starting v1 topic migration..
Jan 31 08:00:11 compute-0 radosgw[95921]: v1 topic migration: finished v1 topic migration
Jan 31 08:00:11 compute-0 radosgw[95921]: framework: beast
Jan 31 08:00:11 compute-0 radosgw[95921]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 08:00:11 compute-0 radosgw[95921]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 08:00:11 compute-0 radosgw[95921]: starting handler: beast
Jan 31 08:00:11 compute-0 radosgw[95921]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:00:11 compute-0 radosgw[95921]: mgrc service_daemon_register rgw.14258 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ockecq,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=385c601d-aa74-497b-a7bd-e523c543abf5,zone_name=default,zonegroup_id=d5fc270a-6d20-462b-9791-a635d54703e7,zonegroup_name=default}
Jan 31 08:00:11 compute-0 podman[97947]: 2026-01-31 08:00:11.565386144 +0000 UTC m=+0.043591424 container create 5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:00:11 compute-0 systemd[1]: Started libpod-conmon-5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf.scope.
Jan 31 08:00:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0515713b9c5dd0a7f98a806e5a1fd68f2d471707a2f7d761139cbb7e0f9a53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0515713b9c5dd0a7f98a806e5a1fd68f2d471707a2f7d761139cbb7e0f9a53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0515713b9c5dd0a7f98a806e5a1fd68f2d471707a2f7d761139cbb7e0f9a53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0515713b9c5dd0a7f98a806e5a1fd68f2d471707a2f7d761139cbb7e0f9a53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:11 compute-0 podman[97947]: 2026-01-31 08:00:11.548412344 +0000 UTC m=+0.026617644 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:11 compute-0 podman[97947]: 2026-01-31 08:00:11.656420504 +0000 UTC m=+0.134625774 container init 5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 08:00:11 compute-0 podman[97947]: 2026-01-31 08:00:11.661951345 +0000 UTC m=+0.140156615 container start 5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:11 compute-0 podman[97947]: 2026-01-31 08:00:11.668734969 +0000 UTC m=+0.146940249 container attach 5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:00:11 compute-0 ceph-mds[96942]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 08:00:11 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mds-cephfs-compute-0-xdvglw[96938]: 2026-01-31T08:00:11.832+0000 7f10c67e3640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 08:00:11 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 08:00:11 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 08:00:12 compute-0 sudo[98028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kouzatrdxhxnroayjpzgucdsgsutdilm ; /usr/bin/python3'
Jan 31 08:00:12 compute-0 sudo[98028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:12 compute-0 python3[98036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:12 compute-0 ceph-mon[75294]: 4.b scrub starts
Jan 31 08:00:12 compute-0 ceph-mon[75294]: 4.b scrub ok
Jan 31 08:00:12 compute-0 lvm[98079]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:00:12 compute-0 lvm[98078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:00:12 compute-0 lvm[98078]: VG ceph_vg0 finished
Jan 31 08:00:12 compute-0 lvm[98079]: VG ceph_vg1 finished
Jan 31 08:00:12 compute-0 podman[98066]: 2026-01-31 08:00:12.378824038 +0000 UTC m=+0.058946340 container create f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51 (image=quay.io/ceph/ceph:v20, name=modest_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:00:12 compute-0 lvm[98085]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:00:12 compute-0 lvm[98085]: VG ceph_vg2 finished
Jan 31 08:00:12 compute-0 systemd[1]: Started libpod-conmon-f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51.scope.
Jan 31 08:00:12 compute-0 podman[98066]: 2026-01-31 08:00:12.34976479 +0000 UTC m=+0.029887122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:00:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a585a0de7995ef7665a43b8c1f7cacdb7997609ee75bfafad3c9692b01232b8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a585a0de7995ef7665a43b8c1f7cacdb7997609ee75bfafad3c9692b01232b8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:12 compute-0 podman[98066]: 2026-01-31 08:00:12.470441564 +0000 UTC m=+0.150563896 container init f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51 (image=quay.io/ceph/ceph:v20, name=modest_driscoll, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:12 compute-0 podman[98066]: 2026-01-31 08:00:12.477548468 +0000 UTC m=+0.157670770 container start f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51 (image=quay.io/ceph/ceph:v20, name=modest_driscoll, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:12 compute-0 podman[98066]: 2026-01-31 08:00:12.481235657 +0000 UTC m=+0.161357959 container attach f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51 (image=quay.io/ceph/ceph:v20, name=modest_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:00:12 compute-0 intelligent_chatelet[97965]: {}
Jan 31 08:00:12 compute-0 systemd[1]: libpod-5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf.scope: Deactivated successfully.
Jan 31 08:00:12 compute-0 podman[97947]: 2026-01-31 08:00:12.535411068 +0000 UTC m=+1.013616358 container died 5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:12 compute-0 systemd[1]: libpod-5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf.scope: Consumed 1.163s CPU time.
Jan 31 08:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac0515713b9c5dd0a7f98a806e5a1fd68f2d471707a2f7d761139cbb7e0f9a53-merged.mount: Deactivated successfully.
Jan 31 08:00:12 compute-0 podman[97947]: 2026-01-31 08:00:12.578951799 +0000 UTC m=+1.057157069 container remove 5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chatelet, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:00:12 compute-0 systemd[1]: libpod-conmon-5f8a883a24092c50ffc3b1b9a83b439a4c0b03286f095bc03eae83793d91a1bf.scope: Deactivated successfully.
Jan 31 08:00:12 compute-0 sudo[97800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:12 compute-0 sudo[98127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:00:12 compute-0 sudo[98127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:12 compute-0 sudo[98127]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:00:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v141: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 13 KiB/s wr, 247 op/s
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 31 08:00:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605597100' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 08:00:12 compute-0 modest_driscoll[98088]: mimic
Jan 31 08:00:12 compute-0 systemd[1]: libpod-f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51.scope: Deactivated successfully.
Jan 31 08:00:12 compute-0 conmon[98088]: conmon f8fee132744d4e178708 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51.scope/container/memory.events
Jan 31 08:00:12 compute-0 podman[98066]: 2026-01-31 08:00:12.914093114 +0000 UTC m=+0.594215416 container died f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51 (image=quay.io/ceph/ceph:v20, name=modest_driscoll, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a585a0de7995ef7665a43b8c1f7cacdb7997609ee75bfafad3c9692b01232b8f-merged.mount: Deactivated successfully.
Jan 31 08:00:12 compute-0 sudo[98153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:12 compute-0 sudo[98153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:12 compute-0 sudo[98153]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:12 compute-0 podman[98066]: 2026-01-31 08:00:12.957495391 +0000 UTC m=+0.637617693 container remove f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51 (image=quay.io/ceph/ceph:v20, name=modest_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:00:12 compute-0 systemd[1]: libpod-conmon-f8fee132744d4e178708f0b30081d31857962dd5138ee3b8cd03688998c1ab51.scope: Deactivated successfully.
Jan 31 08:00:12 compute-0 sudo[98028]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:12 compute-0 sudo[98188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:00:12 compute-0 sudo[98188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:13 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 31 08:00:13 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 31 08:00:13 compute-0 podman[98257]: 2026-01-31 08:00:13.398789638 +0000 UTC m=+0.056995288 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:00:13 compute-0 podman[98257]: 2026-01-31 08:00:13.494050733 +0000 UTC m=+0.152256373 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:13 compute-0 ceph-mon[75294]: pgmap v141: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 13 KiB/s wr, 247 op/s
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:13 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1605597100' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 08:00:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 08:00:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:00:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 08:00:13 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.534450531s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393951416s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.483932495s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.343437195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.534417152s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393951416s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.534446716s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393997192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.483901024s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.343437195s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.483833313s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.343406677s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.483778954s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.343406677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.534333229s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.394050598s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505815506s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365547180s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.534318924s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.394050598s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505803108s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365547180s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533943176s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393814087s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533930779s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393814087s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533818245s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393707275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533806801s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393707275s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533665657s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393638611s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533601761s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393600464s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533655167s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393638611s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533591270s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393600464s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505254745s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365386963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505318642s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365386963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505243301s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365386963s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505073547s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365196228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505227089s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365386963s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533376694s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393592834s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505001068s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365196228s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533365250s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393592834s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505220413s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365463257s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505209923s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365463257s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533264160s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393547058s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.533253670s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393547058s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505386353s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365806580s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505211830s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365646362s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505369186s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365806580s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505199432s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365646362s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505013466s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365547180s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505001068s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365547180s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505086899s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365684509s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505076408s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365684509s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505170822s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365798950s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505158424s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365798950s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532745361s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393402100s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532588005s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393280029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532722473s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393402100s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532575607s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393280029s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532547951s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393280029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532538414s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393280029s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504927635s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365745544s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504920006s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365745544s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504854202s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365768433s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532353401s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393264771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504844666s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365768433s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532220840s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393157959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532331467s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393264771s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532207489s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393157959s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505009651s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365989685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.505000114s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365989685s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532125473s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393165588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.1b( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532113075s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393165588s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532811165s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393875122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504759789s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365852356s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532781601s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393875122s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504740715s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365852356s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504818916s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365989685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.531963348s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393142700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504810333s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365989685s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.531951904s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393142700s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504646301s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365890503s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532806396s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.394065857s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504713058s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365997314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.532796860s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.394065857s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504625320s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365890503s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504699707s) [2] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365997314s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.404935837s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.266288757s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.404922485s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.266288757s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504608154s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 123.365982056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.504595757s) [0] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 123.365982056s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.404793739s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.266235352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.404774666s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.266235352s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.534392357s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393997192s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.531047821s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 119.393959045s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=11.531023979s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 119.393959045s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.f( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.c( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.1( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.3( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.6( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.a( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.9( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.17( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.15( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.12( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[3.1f( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755992889s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.446044922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755958557s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.446044922s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755778313s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.446022034s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755760193s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.446022034s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755739212s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.446029663s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755695343s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.446029663s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755594254s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.446022034s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755576134s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.446022034s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755439758s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445999146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755423546s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445999146s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755422592s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.446029663s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755384445s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.446029663s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755393982s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.446083069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755376816s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.446083069s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.d( v 46'3 (0'0,46'3] local-lis/les=43/44 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.489100456s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'3 lcod 46'2 active pruub 127.179924011s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.d( v 46'3 (0'0,46'3] local-lis/les=43/44 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.489073753s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'3 lcod 46'2 unknown NOTIFY pruub 127.179924011s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755120277s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445999146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755084991s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445999146s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755038261s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.446037292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.755005836s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.446037292s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.f( v 46'5 (0'0,46'5] local-lis/les=43/44 n=3 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488685608s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'4 lcod 46'4 active pruub 127.179893494s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.754359245s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445594788s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.754345894s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445602417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.754344940s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445594788s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.754327774s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445602417s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.f( v 46'5 (0'0,46'5] local-lis/les=43/44 n=3 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488619804s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'4 lcod 46'4 unknown NOTIFY pruub 127.179893494s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.3( v 46'2 (0'0,46'2] local-lis/les=43/44 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488626480s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'2 lcod 46'1 active pruub 127.179962158s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.3( v 46'2 (0'0,46'2] local-lis/les=43/44 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488608360s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'2 lcod 46'1 unknown NOTIFY pruub 127.179962158s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488400459s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 127.179840088s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.754084587s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445533752s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.754075050s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445533752s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488382339s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 127.179840088s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.1e( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753941536s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445495605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753931999s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445495605s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753919601s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445526123s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753902435s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445526123s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753860474s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445533752s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.b( v 46'3 (0'0,46'3] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488116264s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'1 lcod 46'2 active pruub 127.179786682s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753849983s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445533752s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753746986s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445495605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.7( v 46'2 (0'0,46'2] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488040924s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'2 lcod 46'1 active pruub 127.179794312s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753738403s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445495605s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.7( v 46'2 (0'0,46'2] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488018990s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'2 lcod 46'1 unknown NOTIFY pruub 127.179794312s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753659248s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445487976s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753649712s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445487976s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.8( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488010406s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 active pruub 127.179924011s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753542900s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.445472717s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.488000870s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=0'0 unknown NOTIFY pruub 127.179924011s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.753527641s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.445472717s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.5( v 46'3 (0'0,46'3] local-lis/les=43/44 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.487687111s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'3 lcod 46'2 active pruub 127.179710388s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.5( v 46'3 (0'0,46'3] local-lis/les=43/44 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.487669945s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'3 lcod 46'2 unknown NOTIFY pruub 127.179710388s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.666900635s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.358985901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.666876793s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.358985901s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.666857719s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 active pruub 124.358985901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=40/42 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50 pruub=12.666846275s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 unknown NOTIFY pruub 124.358985901s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[6.b( v 46'3 (0'0,46'3] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50 pruub=15.487956047s) [1] r=-1 lpr=50 pi=[43,50)/1 crt=46'1 lcod 46'2 unknown NOTIFY pruub 127.179786682s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.7( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.5( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.16( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[3.18( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504758835s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956306458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504716873s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956306458s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476314545s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.927925110s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476284981s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.927925110s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504563332s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956283569s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476157188s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.927902222s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504543304s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956283569s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504487038s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956314087s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476106644s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.927902222s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504463196s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956314087s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504410744s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956291199s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504267693s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956291199s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504107475s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956237793s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476433754s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.928665161s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.504070282s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956237793s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476414680s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.928665161s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.503938675s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956268311s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.503916740s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956268311s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476215363s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.928665161s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476194382s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.928665161s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476239204s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.928726196s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476069450s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.928695679s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476053238s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.928695679s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.476224899s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.928726196s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.503330231s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956130981s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.503314972s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956130981s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.475919724s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.928810120s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.475902557s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.928810120s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.503325462s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956314087s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.503304482s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956314087s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.475678444s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.928749084s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.475665092s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.928749084s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502871513s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955993652s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502849579s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955993652s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.507212639s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960449219s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502574921s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955871582s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502554893s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955871582s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.507197380s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960449219s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502324104s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955863953s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506875038s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960472107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502263069s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955863953s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506854057s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960472107s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502135277s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955856323s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.502120972s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955856323s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506750107s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960578918s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501679420s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955535889s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506731987s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960578918s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501663208s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955535889s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506525993s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960472107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506510735s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960472107s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501471519s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955459595s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501452446s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955459595s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506564140s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960609436s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501318932s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955459595s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.474567413s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.928848267s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501148224s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955459595s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.474521637s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.928848267s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506201744s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960655212s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506187439s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960655212s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506549835s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960609436s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.506017685s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960700989s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.505999565s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960700989s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501254082s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955986023s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500605583s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955398560s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500591278s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955398560s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.501202583s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955986023s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500540733s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955459595s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500528336s) [1] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955459595s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500418663s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955375671s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500399590s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955375671s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500409126s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.955482483s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.505678177s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960762024s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500389099s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.955482483s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.505641937s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960762024s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.476566315s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.931747437s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.476543427s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.931747437s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.505575180s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960792542s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.505559921s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960792542s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.505472183s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960807800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.505459785s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960807800s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.476284981s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.931770325s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.476235390s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.931770325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.504952431s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 active pruub 118.960807800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/44 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=15.504932404s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 unknown NOTIFY pruub 118.960807800s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.503579140s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 active pruub 114.956146240s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50 pruub=11.500148773s) [0] r=-1 lpr=50 pi=[38,50)/1 crt=0'0 unknown NOTIFY pruub 114.956146240s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 sudo[98370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvinveytflommvhostznalgjpevkzadf ; /usr/bin/python3'
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.15( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.14( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.7( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.5( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.4( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.3( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.2( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.1b( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.1d( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.15( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.12( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.13( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.16( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.9( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[5.1e( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 50 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.11( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.1( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.f( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.c( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.1a( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.19( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 50 pg[5.18( empty local-lis/les=0/0 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:13 compute-0 sudo[98370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:13 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 31 08:00:13 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 31 08:00:13 compute-0 python3[98375]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:13 compute-0 podman[98414]: 2026-01-31 08:00:13.902825196 +0000 UTC m=+0.036574825 container create 33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0 (image=quay.io/ceph/ceph:v20, name=exciting_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:13 compute-0 systemd[1]: Started libpod-conmon-33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0.scope.
Jan 31 08:00:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200c0f93550bbff982a6f09ee1f40911836feb9e241cb153531fb181b48751e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200c0f93550bbff982a6f09ee1f40911836feb9e241cb153531fb181b48751e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:13 compute-0 podman[98414]: 2026-01-31 08:00:13.958139846 +0000 UTC m=+0.091889505 container init 33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0 (image=quay.io/ceph/ceph:v20, name=exciting_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:00:13 compute-0 podman[98414]: 2026-01-31 08:00:13.964079687 +0000 UTC m=+0.097829316 container start 33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0 (image=quay.io/ceph/ceph:v20, name=exciting_proskuriakova, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:00:13 compute-0 podman[98414]: 2026-01-31 08:00:13.968275311 +0000 UTC m=+0.102024960 container attach 33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0 (image=quay.io/ceph/ceph:v20, name=exciting_proskuriakova, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:00:13 compute-0 podman[98414]: 2026-01-31 08:00:13.886518853 +0000 UTC m=+0.020268512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:00:14 compute-0 sudo[98188]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:14 compute-0 sudo[98502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:14 compute-0 sudo[98502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:14 compute-0 sudo[98502]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:14 compute-0 sudo[98527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:00:14 compute-0 sudo[98527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:14 compute-0 podman[98564]: 2026-01-31 08:00:14.430204697 +0000 UTC m=+0.036943114 container create 4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hodgkin, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:00:14 compute-0 systemd[1]: Started libpod-conmon-4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b.scope.
Jan 31 08:00:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:14 compute-0 podman[98564]: 2026-01-31 08:00:14.487480111 +0000 UTC m=+0.094218558 container init 4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hodgkin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:00:14 compute-0 podman[98564]: 2026-01-31 08:00:14.491530781 +0000 UTC m=+0.098269188 container start 4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:14 compute-0 focused_hodgkin[98581]: 167 167
Jan 31 08:00:14 compute-0 systemd[1]: libpod-4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b.scope: Deactivated successfully.
Jan 31 08:00:14 compute-0 podman[98564]: 2026-01-31 08:00:14.494707187 +0000 UTC m=+0.101445604 container attach 4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hodgkin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:00:14 compute-0 podman[98564]: 2026-01-31 08:00:14.495072788 +0000 UTC m=+0.101811215 container died 4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607020235' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 08:00:14 compute-0 exciting_proskuriakova[98447]: 
Jan 31 08:00:14 compute-0 podman[98564]: 2026-01-31 08:00:14.41411183 +0000 UTC m=+0.020850267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-db7697199df7490ab2f19fd899b2b5bc3252d8755cc3862f739a56a49fe44e3a-merged.mount: Deactivated successfully.
Jan 31 08:00:14 compute-0 systemd[1]: libpod-33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0.scope: Deactivated successfully.
Jan 31 08:00:14 compute-0 exciting_proskuriakova[98447]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 31 08:00:14 compute-0 podman[98414]: 2026-01-31 08:00:14.526280674 +0000 UTC m=+0.660030323 container died 33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0 (image=quay.io/ceph/ceph:v20, name=exciting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:00:14 compute-0 podman[98564]: 2026-01-31 08:00:14.542820013 +0000 UTC m=+0.149558440 container remove 4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-200c0f93550bbff982a6f09ee1f40911836feb9e241cb153531fb181b48751e0-merged.mount: Deactivated successfully.
Jan 31 08:00:14 compute-0 podman[98414]: 2026-01-31 08:00:14.578133871 +0000 UTC m=+0.711883500 container remove 33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0 (image=quay.io/ceph/ceph:v20, name=exciting_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:00:14 compute-0 systemd[1]: libpod-conmon-4cd00cb720348c374cda8680ce3d27f582ef9ccbdccc7ce105a75efab60eef9b.scope: Deactivated successfully.
Jan 31 08:00:14 compute-0 sudo[98370]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:14 compute-0 systemd[1]: libpod-conmon-33572130b1199cbcd1a78f30ea0d857c74d54b81726c1d38568e2a42a5c00fb0.scope: Deactivated successfully.
Jan 31 08:00:14 compute-0 podman[98619]: 2026-01-31 08:00:14.660345613 +0000 UTC m=+0.038517917 container create 874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-mon[75294]: 2.1c scrub starts
Jan 31 08:00:14 compute-0 ceph-mon[75294]: 2.1c scrub ok
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:00:14 compute-0 ceph-mon[75294]: osdmap e50: 3 total, 3 up, 3 in
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.18( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-mon[75294]: 7.1d scrub starts
Jan 31 08:00:14 compute-0 ceph-mon[75294]: 7.1d scrub ok
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:00:14 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2607020235' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.14( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.11( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.12( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.13( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.1f( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.15( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.14( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.16( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.15( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.18( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.1c( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.16( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.11( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.13( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.11( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.e( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.a( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.1( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.7( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.5( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.e( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.8( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.1a( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[4.1b( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.1d( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 51 pg[3.1e( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 systemd[1]: Started libpod-conmon-874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1.scope.
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.10( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.13( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.12( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.12( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.15( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.17( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.16( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.9( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.8( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.9( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.b( v 46'3 lc 0'0 (0'0,46'3] local-lis/les=50/51 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=46'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.d( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.9( empty local-lis/les=50/51 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.c( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.f( v 46'5 lc 46'1 (0'0,46'5] local-lis/les=50/51 n=3 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=46'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.8( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.9( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.b( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.a( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.6( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.3( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.2( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.5( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.2( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.1f( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.3( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.4( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.11( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.f( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.1c( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.17( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.1d( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.7( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.1b( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.1( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.c( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[5.1e( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.19( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[3.f( empty local-lis/les=50/51 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 51 pg[2.18( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.d( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.5( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.7( v 46'2 lc 46'1 (0'0,46'2] local-lis/les=50/51 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=46'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.1d( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.3( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.7( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.5( v 46'3 lc 46'1 (0'0,46'3] local-lis/les=50/51 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=46'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.5( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.a( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.1( empty local-lis/les=50/51 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.4( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.4( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.2( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.7( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.6( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.9( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.3( v 46'2 lc 0'0 (0'0,46'2] local-lis/les=50/51 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=46'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.1( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[6.d( v 46'3 lc 46'1 (0'0,46'3] local-lis/les=50/51 n=2 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[43,50)/1 crt=46'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.f( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[4.f( empty local-lis/les=50/51 n=0 ec=40/22 lis/c=40/40 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[2.1b( empty local-lis/les=50/51 n=0 ec=38/19 lis/c=38/38 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.1a( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.19( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 51 pg[5.18( empty local-lis/les=50/51 n=0 ec=42/24 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444985ac96dbb308e6b6a772fb541c4ac833eb2355ccf2aa0e54701f84888434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444985ac96dbb308e6b6a772fb541c4ac833eb2355ccf2aa0e54701f84888434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444985ac96dbb308e6b6a772fb541c4ac833eb2355ccf2aa0e54701f84888434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444985ac96dbb308e6b6a772fb541c4ac833eb2355ccf2aa0e54701f84888434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444985ac96dbb308e6b6a772fb541c4ac833eb2355ccf2aa0e54701f84888434/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:14 compute-0 podman[98619]: 2026-01-31 08:00:14.737215148 +0000 UTC m=+0.115387472 container init 874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:00:14 compute-0 podman[98619]: 2026-01-31 08:00:14.643296869 +0000 UTC m=+0.021469203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:14 compute-0 podman[98619]: 2026-01-31 08:00:14.744734792 +0000 UTC m=+0.122907096 container start 874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:00:14 compute-0 podman[98619]: 2026-01-31 08:00:14.747827856 +0000 UTC m=+0.126000170 container attach 874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 08:00:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v144: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 11 KiB/s wr, 243 op/s
Jan 31 08:00:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:00:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:15 compute-0 exciting_euclid[98635]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:00:15 compute-0 exciting_euclid[98635]: --> All data devices are unavailable
Jan 31 08:00:15 compute-0 systemd[1]: libpod-874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1.scope: Deactivated successfully.
Jan 31 08:00:15 compute-0 podman[98619]: 2026-01-31 08:00:15.162184121 +0000 UTC m=+0.540356425 container died 874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-444985ac96dbb308e6b6a772fb541c4ac833eb2355ccf2aa0e54701f84888434-merged.mount: Deactivated successfully.
Jan 31 08:00:15 compute-0 podman[98619]: 2026-01-31 08:00:15.201329153 +0000 UTC m=+0.579501457 container remove 874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:00:15 compute-0 systemd[1]: libpod-conmon-874481918e80ccdbd18e0750a8905193b14a0ac5648813e8ed4f6f9603ad55b1.scope: Deactivated successfully.
Jan 31 08:00:15 compute-0 sudo[98527]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:15 compute-0 sudo[98665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:15 compute-0 sudo[98665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:15 compute-0 sudo[98665]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:15 compute-0 sudo[98690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:00:15 compute-0 sudo[98690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:15 compute-0 podman[98727]: 2026-01-31 08:00:15.658356765 +0000 UTC m=+0.116121883 container create 5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 31 08:00:15 compute-0 podman[98727]: 2026-01-31 08:00:15.564753924 +0000 UTC m=+0.022519072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 08:00:15 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:00:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 08:00:15 compute-0 systemd[1]: Started libpod-conmon-5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261.scope.
Jan 31 08:00:15 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.e( v 46'3 (0'0,46'3] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.484980583s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=46'2 lcod 46'2 active pruub 127.179847717s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.485017776s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 active pruub 127.179916382s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.e( v 46'3 (0'0,46'3] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.484914780s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=46'2 lcod 46'2 unknown NOTIFY pruub 127.179847717s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.6( v 47'1 (0'0,47'1] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.484792709s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=47'1 lcod 0'0 active pruub 127.179832458s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.6( v 47'1 (0'0,47'1] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.484774590s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=47'1 lcod 0'0 unknown NOTIFY pruub 127.179832458s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.484696388s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 unknown NOTIFY pruub 127.179916382s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.a( v 46'1 (0'0,46'1] local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.484010696s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 lcod 0'0 active pruub 127.179344177s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 52 pg[6.a( v 46'1 (0'0,46'1] local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=13.483940125s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 lcod 0'0 unknown NOTIFY pruub 127.179344177s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:15 compute-0 ceph-mon[75294]: osdmap e51: 3 total, 3 up, 3 in
Jan 31 08:00:15 compute-0 ceph-mon[75294]: pgmap v144: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 11 KiB/s wr, 243 op/s
Jan 31 08:00:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:00:15 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:15 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:15 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:15 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:15 compute-0 podman[98727]: 2026-01-31 08:00:15.727261105 +0000 UTC m=+0.185026243 container init 5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_albattani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:15 compute-0 podman[98727]: 2026-01-31 08:00:15.732046405 +0000 UTC m=+0.189811533 container start 5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_albattani, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:00:15 compute-0 wonderful_albattani[98743]: 167 167
Jan 31 08:00:15 compute-0 systemd[1]: libpod-5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261.scope: Deactivated successfully.
Jan 31 08:00:15 compute-0 podman[98727]: 2026-01-31 08:00:15.735713875 +0000 UTC m=+0.193479053 container attach 5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:15 compute-0 podman[98727]: 2026-01-31 08:00:15.736185167 +0000 UTC m=+0.193950295 container died 5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_albattani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:00:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ae48388e872c99ab30146061aabbe2119184657bd7a1a465c2a98b98cc7f971-merged.mount: Deactivated successfully.
Jan 31 08:00:15 compute-0 podman[98727]: 2026-01-31 08:00:15.770402445 +0000 UTC m=+0.228167563 container remove 5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_albattani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:00:15 compute-0 systemd[1]: libpod-conmon-5d6593ac0147eb09d5850a31126d3d994905a814554b225ed6faa7584d331261.scope: Deactivated successfully.
Jan 31 08:00:15 compute-0 podman[98766]: 2026-01-31 08:00:15.877541673 +0000 UTC m=+0.038253149 container create 8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:00:15 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event a52e221b-5ba9-484e-8919-9d9b11a96ab4 (Global Recovery Event) in 15 seconds
Jan 31 08:00:15 compute-0 systemd[1]: Started libpod-conmon-8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390.scope.
Jan 31 08:00:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac9c7e024af8f069ac9d0ecf73e64595edc95ee93d3ad25b7ddeb4ac6a7390f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac9c7e024af8f069ac9d0ecf73e64595edc95ee93d3ad25b7ddeb4ac6a7390f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac9c7e024af8f069ac9d0ecf73e64595edc95ee93d3ad25b7ddeb4ac6a7390f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac9c7e024af8f069ac9d0ecf73e64595edc95ee93d3ad25b7ddeb4ac6a7390f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:15 compute-0 podman[98766]: 2026-01-31 08:00:15.945597229 +0000 UTC m=+0.106308715 container init 8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:00:15 compute-0 podman[98766]: 2026-01-31 08:00:15.951236863 +0000 UTC m=+0.111948339 container start 8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:15 compute-0 podman[98766]: 2026-01-31 08:00:15.956043854 +0000 UTC m=+0.116755340 container attach 8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cohen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:00:15 compute-0 podman[98766]: 2026-01-31 08:00:15.860486081 +0000 UTC m=+0.021197567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:16 compute-0 admiring_cohen[98783]: {
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:     "0": [
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:         {
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "devices": [
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "/dev/loop3"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             ],
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_name": "ceph_lv0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_size": "21470642176",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "name": "ceph_lv0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "tags": {
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cluster_name": "ceph",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.crush_device_class": "",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.encrypted": "0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.objectstore": "bluestore",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osd_id": "0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.type": "block",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.vdo": "0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.with_tpm": "0"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             },
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "type": "block",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "vg_name": "ceph_vg0"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:         }
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:     ],
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:     "1": [
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:         {
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "devices": [
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "/dev/loop4"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             ],
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_name": "ceph_lv1",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_size": "21470642176",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "name": "ceph_lv1",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "tags": {
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cluster_name": "ceph",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.crush_device_class": "",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.encrypted": "0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.objectstore": "bluestore",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osd_id": "1",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.type": "block",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.vdo": "0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.with_tpm": "0"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             },
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "type": "block",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "vg_name": "ceph_vg1"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:         }
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:     ],
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:     "2": [
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:         {
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "devices": [
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "/dev/loop5"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             ],
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_name": "ceph_lv2",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_size": "21470642176",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "name": "ceph_lv2",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "tags": {
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.cluster_name": "ceph",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.crush_device_class": "",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.encrypted": "0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.objectstore": "bluestore",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osd_id": "2",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.type": "block",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.vdo": "0",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:                 "ceph.with_tpm": "0"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             },
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "type": "block",
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:             "vg_name": "ceph_vg2"
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:         }
Jan 31 08:00:16 compute-0 admiring_cohen[98783]:     ]
Jan 31 08:00:16 compute-0 admiring_cohen[98783]: }
Jan 31 08:00:16 compute-0 systemd[1]: libpod-8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390.scope: Deactivated successfully.
Jan 31 08:00:16 compute-0 podman[98792]: 2026-01-31 08:00:16.256549969 +0000 UTC m=+0.020307303 container died 8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac9c7e024af8f069ac9d0ecf73e64595edc95ee93d3ad25b7ddeb4ac6a7390f-merged.mount: Deactivated successfully.
Jan 31 08:00:16 compute-0 podman[98792]: 2026-01-31 08:00:16.302515955 +0000 UTC m=+0.066273289 container remove 8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cohen, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:16 compute-0 systemd[1]: libpod-conmon-8c9cf054a9f4701e2ce26085387019b11032b8dcb86a8cac7fff047c9dedb390.scope: Deactivated successfully.
Jan 31 08:00:16 compute-0 sudo[98690]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:16 compute-0 sudo[98807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:16 compute-0 sudo[98807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:16 compute-0 sudo[98807]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:16 compute-0 sudo[98832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:00:16 compute-0 sudo[98832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:16 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 08:00:16 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 08:00:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 08:00:16 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:00:16 compute-0 ceph-mon[75294]: osdmap e52: 3 total, 3 up, 3 in
Jan 31 08:00:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 08:00:16 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 08:00:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 53 pg[6.a( v 46'1 (0'0,46'1] local-lis/les=52/53 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=46'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 53 pg[6.e( v 46'3 lc 46'1 (0'0,46'3] local-lis/les=52/53 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=46'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 53 pg[6.6( v 47'1 lc 0'0 (0'0,47'1] local-lis/les=52/53 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=47'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 53 pg[6.2( empty local-lis/les=52/53 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:16 compute-0 podman[98869]: 2026-01-31 08:00:16.744837709 +0000 UTC m=+0.034966960 container create 73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:00:16 compute-0 systemd[1]: Started libpod-conmon-73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084.scope.
Jan 31 08:00:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v147: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 31 08:00:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 08:00:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:00:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:16 compute-0 podman[98869]: 2026-01-31 08:00:16.794270541 +0000 UTC m=+0.084399812 container init 73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lewin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:00:16 compute-0 podman[98869]: 2026-01-31 08:00:16.799798991 +0000 UTC m=+0.089928242 container start 73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lewin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:16 compute-0 sleepy_lewin[98886]: 167 167
Jan 31 08:00:16 compute-0 systemd[1]: libpod-73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084.scope: Deactivated successfully.
Jan 31 08:00:16 compute-0 podman[98869]: 2026-01-31 08:00:16.808446225 +0000 UTC m=+0.098575496 container attach 73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lewin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:16 compute-0 podman[98869]: 2026-01-31 08:00:16.809022531 +0000 UTC m=+0.099151782 container died 73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lewin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:16 compute-0 podman[98869]: 2026-01-31 08:00:16.730287104 +0000 UTC m=+0.020416395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f500adf9c7715f51f41abde99c80df7923840773cbbd60771e736120150af92f-merged.mount: Deactivated successfully.
Jan 31 08:00:16 compute-0 podman[98869]: 2026-01-31 08:00:16.903862454 +0000 UTC m=+0.193991715 container remove 73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:00:16 compute-0 systemd[1]: libpod-conmon-73f4ee96c9d5e661e730ee65f20505b6d1ecb46161c1999fc9af2ca9a873a084.scope: Deactivated successfully.
Jan 31 08:00:16 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 31 08:00:16 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 31 08:00:17 compute-0 podman[98912]: 2026-01-31 08:00:17.112952108 +0000 UTC m=+0.103706145 container create 6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 08:00:17 compute-0 podman[98912]: 2026-01-31 08:00:17.035708592 +0000 UTC m=+0.026462659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:00:17 compute-0 systemd[1]: Started libpod-conmon-6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df.scope.
Jan 31 08:00:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b911bf2af1fd2ef397e83a5ba36f89830f5576564dbc8e37d9aeb141fd9cd30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b911bf2af1fd2ef397e83a5ba36f89830f5576564dbc8e37d9aeb141fd9cd30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b911bf2af1fd2ef397e83a5ba36f89830f5576564dbc8e37d9aeb141fd9cd30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b911bf2af1fd2ef397e83a5ba36f89830f5576564dbc8e37d9aeb141fd9cd30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:17 compute-0 podman[98912]: 2026-01-31 08:00:17.249507884 +0000 UTC m=+0.240262011 container init 6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:00:17 compute-0 podman[98912]: 2026-01-31 08:00:17.257866851 +0000 UTC m=+0.248620938 container start 6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shaw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:00:17 compute-0 podman[98912]: 2026-01-31 08:00:17.293108788 +0000 UTC m=+0.283862825 container attach 6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shaw, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:00:17 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 31 08:00:17 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 31 08:00:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 08:00:17 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:00:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 08:00:17 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 08:00:17 compute-0 ceph-mon[75294]: 3.19 scrub starts
Jan 31 08:00:17 compute-0 ceph-mon[75294]: 3.19 scrub ok
Jan 31 08:00:17 compute-0 ceph-mon[75294]: osdmap e53: 3 total, 3 up, 3 in
Jan 31 08:00:17 compute-0 ceph-mon[75294]: pgmap v147: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 31 08:00:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:00:17 compute-0 ceph-mon[75294]: 4.17 scrub starts
Jan 31 08:00:17 compute-0 ceph-mon[75294]: 4.17 scrub ok
Jan 31 08:00:17 compute-0 lvm[99007]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:00:17 compute-0 lvm[99007]: VG ceph_vg1 finished
Jan 31 08:00:17 compute-0 lvm[99004]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:00:17 compute-0 lvm[99004]: VG ceph_vg0 finished
Jan 31 08:00:17 compute-0 lvm[99009]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:00:17 compute-0 lvm[99009]: VG ceph_vg2 finished
Jan 31 08:00:17 compute-0 lvm[99010]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:00:17 compute-0 lvm[99010]: VG ceph_vg0 finished
Jan 31 08:00:17 compute-0 admiring_shaw[98928]: {}
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.b( v 46'3 (0'0,46'3] local-lis/les=50/51 n=1 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.741026878s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'3 active pruub 124.870880127s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.b( v 46'3 (0'0,46'3] local-lis/les=50/51 n=1 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.740983009s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'3 unknown NOTIFY pruub 124.870880127s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.7( v 46'2 (0'0,46'2] local-lis/les=50/51 n=1 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.743014336s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'2 active pruub 124.872924805s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.7( v 46'2 (0'0,46'2] local-lis/les=50/51 n=1 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.742988586s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'2 unknown NOTIFY pruub 124.872924805s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.f( v 46'5 (0'0,46'5] local-lis/les=50/51 n=3 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.740921021s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'5 active pruub 124.871025085s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.f( v 46'5 (0'0,46'5] local-lis/les=50/51 n=3 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.740836143s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'5 unknown NOTIFY pruub 124.871025085s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.3( v 46'2 (0'0,46'2] local-lis/les=50/51 n=2 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.743093491s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'2 active pruub 124.873504639s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 54 pg[6.3( v 46'2 (0'0,46'2] local-lis/les=50/51 n=2 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=12.743060112s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=46'2 unknown NOTIFY pruub 124.873504639s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:17 compute-0 systemd[1]: libpod-6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df.scope: Deactivated successfully.
Jan 31 08:00:17 compute-0 podman[98912]: 2026-01-31 08:00:17.957227249 +0000 UTC m=+0.947981306 container died 6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:00:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b911bf2af1fd2ef397e83a5ba36f89830f5576564dbc8e37d9aeb141fd9cd30-merged.mount: Deactivated successfully.
Jan 31 08:00:18 compute-0 podman[98912]: 2026-01-31 08:00:18.093804256 +0000 UTC m=+1.084558313 container remove 6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shaw, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:18 compute-0 systemd[1]: libpod-conmon-6c21af07d84fd32b6c20e24b25295faf3787140bdce0295a9d6038987ce319df.scope: Deactivated successfully.
Jan 31 08:00:18 compute-0 sudo[98832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:00:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:00:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:18 compute-0 sudo[99027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:00:18 compute-0 sudo[99027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:18 compute-0 sudo[99027]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:18 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 31 08:00:18 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 31 08:00:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v149: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 197 B/s, 2 keys/s, 3 objects/s recovering
Jan 31 08:00:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 08:00:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:00:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 08:00:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:00:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 08:00:18 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.c( v 46'2 (0'0,46'2] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55 pruub=10.373520851s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=46'2 lcod 46'1 active pruub 127.179992676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:18 compute-0 ceph-mon[75294]: 7.12 scrub starts
Jan 31 08:00:18 compute-0 ceph-mon[75294]: 7.12 scrub ok
Jan 31 08:00:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:00:18 compute-0 ceph-mon[75294]: osdmap e54: 3 total, 3 up, 3 in
Jan 31 08:00:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.c( v 46'2 (0'0,46'2] local-lis/les=43/44 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55 pruub=10.373462677s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=46'2 lcod 46'1 unknown NOTIFY pruub 127.179992676s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 55 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.4( v 46'6 (0'0,46'6] local-lis/les=43/44 n=4 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55 pruub=10.372361183s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=46'6 lcod 46'5 active pruub 127.179779053s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.4( v 46'6 (0'0,46'6] local-lis/les=43/44 n=4 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55 pruub=10.372279167s) [1] r=-1 lpr=55 pi=[43,55)/1 crt=46'6 lcod 46'5 unknown NOTIFY pruub 127.179779053s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 55 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.3( v 46'2 lc 0'0 (0'0,46'2] local-lis/les=54/55 n=2 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=46'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.b( v 46'3 lc 0'0 (0'0,46'3] local-lis/les=54/55 n=1 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=46'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.7( v 46'2 lc 46'1 (0'0,46'2] local-lis/les=54/55 n=1 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=46'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 55 pg[6.f( v 46'5 lc 46'1 (0'0,46'5] local-lis/les=54/55 n=3 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=46'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:19 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 08:00:19 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 08:00:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 08:00:19 compute-0 ceph-mon[75294]: 7.10 scrub starts
Jan 31 08:00:19 compute-0 ceph-mon[75294]: 7.10 scrub ok
Jan 31 08:00:19 compute-0 ceph-mon[75294]: pgmap v149: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 197 B/s, 2 keys/s, 3 objects/s recovering
Jan 31 08:00:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:00:19 compute-0 ceph-mon[75294]: osdmap e55: 3 total, 3 up, 3 in
Jan 31 08:00:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 08:00:19 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 08:00:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 56 pg[6.4( v 46'6 lc 46'1 (0'0,46'6] local-lis/les=55/56 n=4 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 56 pg[6.c( v 46'2 lc 46'1 (0'0,46'2] local-lis/les=55/56 n=1 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=55) [1] r=0 lpr=55 pi=[43,55)/1 crt=46'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:20 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 31 08:00:20 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 31 08:00:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:20 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 08:00:20 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 08:00:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v152: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 199 B/s, 2 keys/s, 3 objects/s recovering
Jan 31 08:00:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 08:00:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:00:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 08:00:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:00:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 08:00:20 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 08:00:20 compute-0 ceph-mon[75294]: 5.1c scrub starts
Jan 31 08:00:20 compute-0 ceph-mon[75294]: 5.1c scrub ok
Jan 31 08:00:20 compute-0 ceph-mon[75294]: osdmap e56: 3 total, 3 up, 3 in
Jan 31 08:00:20 compute-0 ceph-mon[75294]: 4.16 scrub starts
Jan 31 08:00:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:00:20 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 12 completed events
Jan 31 08:00:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:00:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:21 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 31 08:00:21 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 31 08:00:22 compute-0 ceph-mon[75294]: 4.16 scrub ok
Jan 31 08:00:22 compute-0 ceph-mon[75294]: 3.13 scrub starts
Jan 31 08:00:22 compute-0 ceph-mon[75294]: 3.13 scrub ok
Jan 31 08:00:22 compute-0 ceph-mon[75294]: pgmap v152: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 199 B/s, 2 keys/s, 3 objects/s recovering
Jan 31 08:00:22 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:00:22 compute-0 ceph-mon[75294]: osdmap e57: 3 total, 3 up, 3 in
Jan 31 08:00:22 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:00:22 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 57 pg[6.5( v 46'3 (0'0,46'3] local-lis/les=50/51 n=2 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.696278572s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=46'3 active pruub 124.873023987s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:22 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 57 pg[6.5( v 46'3 (0'0,46'3] local-lis/les=50/51 n=2 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=57 pruub=8.695947647s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=46'3 unknown NOTIFY pruub 124.873023987s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:22 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 57 pg[6.d( v 46'3 (0'0,46'3] local-lis/les=50/51 n=2 ec=43/26 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=8.696102142s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=46'3 active pruub 124.873580933s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:22 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 57 pg[6.d( v 46'3 (0'0,46'3] local-lis/les=50/51 n=2 ec=43/26 lis/c=50/50 les/c/f=51/52/0 sis=57 pruub=8.696041107s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=46'3 unknown NOTIFY pruub 124.873580933s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:22 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:22 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:22 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 08:00:22 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 08:00:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v154: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 322 B/s, 0 objects/s recovering
Jan 31 08:00:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 08:00:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:00:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 08:00:23 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:00:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 08:00:23 compute-0 ceph-mon[75294]: 2.1a scrub starts
Jan 31 08:00:23 compute-0 ceph-mon[75294]: 2.1a scrub ok
Jan 31 08:00:23 compute-0 ceph-mon[75294]: 3.14 scrub starts
Jan 31 08:00:23 compute-0 ceph-mon[75294]: 3.14 scrub ok
Jan 31 08:00:23 compute-0 ceph-mon[75294]: pgmap v154: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 322 B/s, 0 objects/s recovering
Jan 31 08:00:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:00:23 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 08:00:23 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 58 pg[6.5( v 46'3 lc 46'1 (0'0,46'3] local-lis/les=57/58 n=2 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=46'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:23 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 58 pg[6.d( v 46'3 lc 46'1 (0'0,46'3] local-lis/les=57/58 n=2 ec=43/26 lis/c=50/50 les/c/f=51/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=46'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:24 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:00:24 compute-0 ceph-mon[75294]: osdmap e58: 3 total, 3 up, 3 in
Jan 31 08:00:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v156: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 379 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 08:00:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:25 compute-0 ceph-mon[75294]: pgmap v156: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 379 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 08:00:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 31 08:00:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 31 08:00:26 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 31 08:00:26 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 31 08:00:26 compute-0 ceph-mon[75294]: 7.17 scrub starts
Jan 31 08:00:26 compute-0 ceph-mon[75294]: 7.17 scrub ok
Jan 31 08:00:26 compute-0 ceph-mon[75294]: 4.15 scrub starts
Jan 31 08:00:26 compute-0 ceph-mon[75294]: 4.15 scrub ok
Jan 31 08:00:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v157: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 326 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:00:27 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 08:00:27 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 08:00:27 compute-0 ceph-mon[75294]: pgmap v157: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 326 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:00:27 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 31 08:00:27 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 31 08:00:28 compute-0 ceph-mon[75294]: 4.c scrub starts
Jan 31 08:00:28 compute-0 ceph-mon[75294]: 4.c scrub ok
Jan 31 08:00:28 compute-0 ceph-mon[75294]: 7.16 scrub starts
Jan 31 08:00:28 compute-0 ceph-mon[75294]: 7.16 scrub ok
Jan 31 08:00:28 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 31 08:00:28 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 31 08:00:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v158: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 303 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:00:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 08:00:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:00:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 08:00:29 compute-0 ceph-mon[75294]: 3.10 scrub starts
Jan 31 08:00:29 compute-0 ceph-mon[75294]: 3.10 scrub ok
Jan 31 08:00:29 compute-0 ceph-mon[75294]: pgmap v158: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 303 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:00:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:00:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:00:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 08:00:29 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 08:00:29 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 31 08:00:29 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 31 08:00:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:00:30 compute-0 ceph-mon[75294]: osdmap e59: 3 total, 3 up, 3 in
Jan 31 08:00:30 compute-0 ceph-mon[75294]: 7.14 scrub starts
Jan 31 08:00:30 compute-0 ceph-mon[75294]: 7.14 scrub ok
Jan 31 08:00:30 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 08:00:30 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 08:00:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v160: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:00:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 08:00:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:00:31 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 31 08:00:31 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 31 08:00:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 08:00:31 compute-0 ceph-mon[75294]: 7.b scrub starts
Jan 31 08:00:31 compute-0 ceph-mon[75294]: 7.b scrub ok
Jan 31 08:00:31 compute-0 ceph-mon[75294]: pgmap v160: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:00:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:00:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:00:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 08:00:31 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 08:00:31 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 08:00:31 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 08:00:31 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 08:00:31 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 08:00:32 compute-0 ceph-mon[75294]: 4.3 scrub starts
Jan 31 08:00:32 compute-0 ceph-mon[75294]: 4.3 scrub ok
Jan 31 08:00:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:00:32 compute-0 ceph-mon[75294]: osdmap e60: 3 total, 3 up, 3 in
Jan 31 08:00:32 compute-0 ceph-mon[75294]: 3.d scrub starts
Jan 31 08:00:32 compute-0 ceph-mon[75294]: 3.d scrub ok
Jan 31 08:00:32 compute-0 ceph-mon[75294]: 5.1f scrub starts
Jan 31 08:00:32 compute-0 ceph-mon[75294]: 5.1f scrub ok
Jan 31 08:00:32 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 60 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=60 pruub=12.493723869s) [2] r=-1 lpr=60 pi=[43,60)/1 crt=0'0 active pruub 143.179977417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:32 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 60 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=60 pruub=12.493689537s) [2] r=-1 lpr=60 pi=[43,60)/1 crt=0'0 unknown NOTIFY pruub 143.179977417s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:32 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 60 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=60) [2] r=0 lpr=60 pi=[43,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v162: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 08:00:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 08:00:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:00:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 08:00:33 compute-0 ceph-mon[75294]: pgmap v162: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 08:00:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:00:33 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 31 08:00:33 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 31 08:00:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:00:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 08:00:33 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 08:00:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 61 pg[6.9( empty local-lis/les=50/51 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=61 pruub=12.944042206s) [0] r=-1 lpr=61 pi=[50,61)/1 crt=0'0 active pruub 140.871444702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 61 pg[6.9( empty local-lis/les=50/51 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=61 pruub=12.943993568s) [0] r=-1 lpr=61 pi=[50,61)/1 crt=0'0 unknown NOTIFY pruub 140.871444702s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 61 pg[6.8( empty local-lis/les=60/61 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=60) [2] r=0 lpr=60 pi=[43,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 61 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:34 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 08:00:34 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 08:00:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 08:00:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v164: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:34 compute-0 ceph-mon[75294]: 3.2 scrub starts
Jan 31 08:00:34 compute-0 ceph-mon[75294]: 3.2 scrub ok
Jan 31 08:00:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:00:34 compute-0 ceph-mon[75294]: osdmap e61: 3 total, 3 up, 3 in
Jan 31 08:00:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 08:00:35 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 08:00:35 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 62 pg[6.9( empty local-lis/les=61/62 n=0 ec=43/26 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:35 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 31 08:00:35 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 31 08:00:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 31 08:00:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 31 08:00:36 compute-0 ceph-mon[75294]: 5.10 scrub starts
Jan 31 08:00:36 compute-0 ceph-mon[75294]: 5.10 scrub ok
Jan 31 08:00:36 compute-0 ceph-mon[75294]: pgmap v164: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:36 compute-0 ceph-mon[75294]: osdmap e62: 3 total, 3 up, 3 in
Jan 31 08:00:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v166: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:37 compute-0 ceph-mon[75294]: 4.19 scrub starts
Jan 31 08:00:37 compute-0 ceph-mon[75294]: 4.19 scrub ok
Jan 31 08:00:37 compute-0 ceph-mon[75294]: 2.14 scrub starts
Jan 31 08:00:37 compute-0 ceph-mon[75294]: 2.14 scrub ok
Jan 31 08:00:37 compute-0 ceph-mon[75294]: pgmap v166: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:37 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 31 08:00:37 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 31 08:00:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 31 08:00:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 31 08:00:38 compute-0 ceph-mon[75294]: 4.6 scrub starts
Jan 31 08:00:38 compute-0 ceph-mon[75294]: 4.6 scrub ok
Jan 31 08:00:38 compute-0 ceph-mon[75294]: 3.0 scrub starts
Jan 31 08:00:38 compute-0 ceph-mon[75294]: 3.0 scrub ok
Jan 31 08:00:38 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 31 08:00:38 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 31 08:00:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v167: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 08:00:38 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:00:39 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 31 08:00:39 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 31 08:00:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 08:00:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:00:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 08:00:39 compute-0 ceph-mon[75294]: 7.0 scrub starts
Jan 31 08:00:39 compute-0 ceph-mon[75294]: 7.0 scrub ok
Jan 31 08:00:39 compute-0 ceph-mon[75294]: pgmap v167: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:39 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:00:39 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 08:00:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:40 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 63 pg[6.a( v 46'1 (0'0,46'1] local-lis/les=52/53 n=0 ec=43/26 lis/c=52/52 les/c/f=53/53/0 sis=63 pruub=8.603981972s) [0] r=-1 lpr=63 pi=[52,63)/1 crt=46'1 lcod 0'0 active pruub 142.910156250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:40 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 63 pg[6.a( v 46'1 (0'0,46'1] local-lis/les=52/53 n=0 ec=43/26 lis/c=52/52 les/c/f=53/53/0 sis=63 pruub=8.603896141s) [0] r=-1 lpr=63 pi=[52,63)/1 crt=46'1 lcod 0'0 unknown NOTIFY pruub 142.910156250s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:40 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 63 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=52/52 les/c/f=53/53/0 sis=63) [0] r=0 lpr=63 pi=[52,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:40 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 31 08:00:40 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 31 08:00:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 08:00:40 compute-0 ceph-mon[75294]: 4.1d scrub starts
Jan 31 08:00:40 compute-0 ceph-mon[75294]: 4.1d scrub ok
Jan 31 08:00:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:00:40 compute-0 ceph-mon[75294]: osdmap e63: 3 total, 3 up, 3 in
Jan 31 08:00:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v169: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 08:00:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:00:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 08:00:40 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 08:00:41 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 64 pg[6.a( v 46'1 (0'0,46'1] local-lis/les=63/64 n=0 ec=43/26 lis/c=52/52 les/c/f=53/53/0 sis=63) [0] r=0 lpr=63 pi=[52,63)/1 crt=46'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:41 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 31 08:00:41 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 31 08:00:41 compute-0 ceph-mon[75294]: 4.0 scrub starts
Jan 31 08:00:41 compute-0 ceph-mon[75294]: 4.0 scrub ok
Jan 31 08:00:41 compute-0 ceph-mon[75294]: pgmap v169: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:41 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:00:41 compute-0 ceph-mon[75294]: osdmap e64: 3 total, 3 up, 3 in
Jan 31 08:00:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 08:00:41 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:00:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 08:00:41 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 08:00:41 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 65 pg[6.b( v 46'3 (0'0,46'3] local-lis/les=54/55 n=1 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=8.897294044s) [1] r=-1 lpr=65 pi=[54,65)/1 crt=46'3 active pruub 148.810470581s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:41 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 65 pg[6.b( v 46'3 (0'0,46'3] local-lis/les=54/55 n=1 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=8.897246361s) [1] r=-1 lpr=65 pi=[54,65)/1 crt=46'3 unknown NOTIFY pruub 148.810470581s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:41 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 65 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=65) [1] r=0 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:42 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 08:00:42 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 08:00:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v172: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:42 compute-0 ceph-mon[75294]: 3.4 scrub starts
Jan 31 08:00:42 compute-0 ceph-mon[75294]: 3.4 scrub ok
Jan 31 08:00:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:00:42 compute-0 ceph-mon[75294]: osdmap e65: 3 total, 3 up, 3 in
Jan 31 08:00:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 08:00:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 08:00:43 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 08:00:43 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 31 08:00:43 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 66 pg[6.b( v 46'3 lc 0'0 (0'0,46'3] local-lis/les=65/66 n=1 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=65) [1] r=0 lpr=65 pi=[54,65)/1 crt=46'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:43 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 31 08:00:43 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 08:00:43 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 08:00:44 compute-0 ceph-mon[75294]: 7.1b scrub starts
Jan 31 08:00:44 compute-0 ceph-mon[75294]: 7.1b scrub ok
Jan 31 08:00:44 compute-0 ceph-mon[75294]: pgmap v172: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:44 compute-0 ceph-mon[75294]: osdmap e66: 3 total, 3 up, 3 in
Jan 31 08:00:44 compute-0 ceph-mon[75294]: 3.b scrub starts
Jan 31 08:00:44 compute-0 ceph-mon[75294]: 3.b scrub ok
Jan 31 08:00:44 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 31 08:00:44 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 31 08:00:44 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 31 08:00:44 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 31 08:00:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v174: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:45 compute-0 ceph-mon[75294]: 2.13 scrub starts
Jan 31 08:00:45 compute-0 ceph-mon[75294]: 2.13 scrub ok
Jan 31 08:00:45 compute-0 ceph-mon[75294]: 7.7 scrub starts
Jan 31 08:00:45 compute-0 ceph-mon[75294]: 7.7 scrub ok
Jan 31 08:00:45 compute-0 ceph-mon[75294]: pgmap v174: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:46 compute-0 ceph-mon[75294]: 2.12 scrub starts
Jan 31 08:00:46 compute-0 ceph-mon[75294]: 2.12 scrub ok
Jan 31 08:00:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v175: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:47 compute-0 ceph-mon[75294]: pgmap v175: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:48 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 31 08:00:48 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 31 08:00:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v176: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:00:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 08:00:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:00:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 08:00:49 compute-0 ceph-mon[75294]: 7.d scrub starts
Jan 31 08:00:49 compute-0 ceph-mon[75294]: 7.d scrub ok
Jan 31 08:00:49 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:00:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:00:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 08:00:49 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 08:00:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:50 compute-0 ceph-mon[75294]: pgmap v176: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:00:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:00:50 compute-0 ceph-mon[75294]: osdmap e67: 3 total, 3 up, 3 in
Jan 31 08:00:50 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 31 08:00:50 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 31 08:00:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:00:50
Jan 31 08:00:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:00:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:00:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Jan 31 08:00:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:00:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v178: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:00:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 08:00:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:00:51 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 31 08:00:51 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 31 08:00:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 08:00:51 compute-0 ceph-mon[75294]: 2.10 scrub starts
Jan 31 08:00:51 compute-0 ceph-mon[75294]: 2.10 scrub ok
Jan 31 08:00:51 compute-0 ceph-mon[75294]: pgmap v178: 181 pgs: 181 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:00:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:00:51 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:00:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 08:00:51 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 08:00:52 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 31 08:00:52 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 31 08:00:52 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 68 pg[6.d( v 46'3 (0'0,46'3] local-lis/les=57/58 n=2 ec=43/26 lis/c=57/57 les/c/f=58/58/0 sis=68 pruub=10.796875000s) [1] r=-1 lpr=68 pi=[57,68)/1 crt=46'3 active pruub 161.117630005s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:00:52 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 68 pg[6.d( v 46'3 (0'0,46'3] local-lis/les=57/58 n=2 ec=43/26 lis/c=57/57 les/c/f=58/58/0 sis=68 pruub=10.796802521s) [1] r=-1 lpr=68 pi=[57,68)/1 crt=46'3 unknown NOTIFY pruub 161.117630005s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:00:52 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 68 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=57/57 les/c/f=58/58/0 sis=68) [1] r=0 lpr=68 pi=[57,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:00:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v180: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:00:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 08:00:52 compute-0 ceph-mon[75294]: 3.12 scrub starts
Jan 31 08:00:52 compute-0 ceph-mon[75294]: 3.12 scrub ok
Jan 31 08:00:52 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:00:52 compute-0 ceph-mon[75294]: osdmap e68: 3 total, 3 up, 3 in
Jan 31 08:00:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 08:00:53 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 08:00:53 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 69 pg[6.d( v 46'3 lc 46'1 (0'0,46'3] local-lis/les=68/69 n=2 ec=43/26 lis/c=57/57 les/c/f=58/58/0 sis=68) [1] r=0 lpr=68 pi=[57,68)/1 crt=46'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:00:53 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 08:00:53 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 08:00:54 compute-0 ceph-mon[75294]: 7.13 scrub starts
Jan 31 08:00:54 compute-0 ceph-mon[75294]: 7.13 scrub ok
Jan 31 08:00:54 compute-0 ceph-mon[75294]: pgmap v180: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:00:54 compute-0 ceph-mon[75294]: osdmap e69: 3 total, 3 up, 3 in
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:00:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v182: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:55 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 31 08:00:55 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 31 08:00:55 compute-0 ceph-mon[75294]: 5.17 scrub starts
Jan 31 08:00:55 compute-0 ceph-mon[75294]: 5.17 scrub ok
Jan 31 08:00:55 compute-0 ceph-mon[75294]: pgmap v182: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:00:55 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 31 08:00:55 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 31 08:00:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 31 08:00:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 31 08:00:56 compute-0 ceph-mon[75294]: 3.15 scrub starts
Jan 31 08:00:56 compute-0 ceph-mon[75294]: 3.15 scrub ok
Jan 31 08:00:56 compute-0 ceph-mon[75294]: 5.8 scrub starts
Jan 31 08:00:56 compute-0 ceph-mon[75294]: 5.8 scrub ok
Jan 31 08:00:56 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 31 08:00:56 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 31 08:00:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v183: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:57 compute-0 ceph-mon[75294]: 2.16 scrub starts
Jan 31 08:00:57 compute-0 ceph-mon[75294]: 2.16 scrub ok
Jan 31 08:00:57 compute-0 ceph-mon[75294]: 7.1e scrub starts
Jan 31 08:00:57 compute-0 ceph-mon[75294]: 7.1e scrub ok
Jan 31 08:00:57 compute-0 ceph-mon[75294]: pgmap v183: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:00:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v184: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 31 08:00:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 08:00:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:00:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 08:00:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:00:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 08:00:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:00:58 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 08:00:59 compute-0 ceph-mon[75294]: pgmap v184: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 31 08:00:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:00:59 compute-0 ceph-mon[75294]: osdmap e70: 3 total, 3 up, 3 in
Jan 31 08:01:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:00 compute-0 sudo[99076]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxtlflraetoeruvjfdwxdpgqzulbhqo ; /usr/bin/python3'
Jan 31 08:01:00 compute-0 sudo[99076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 08:01:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 08:01:00 compute-0 python3[99078]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:00 compute-0 podman[99079]: 2026-01-31 08:01:00.748131025 +0000 UTC m=+0.051023370 container create 542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1 (image=quay.io/ceph/ceph:v20, name=compassionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:01:00 compute-0 systemd[1]: Started libpod-conmon-542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1.scope.
Jan 31 08:01:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v186: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 31 08:01:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 08:01:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:01:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0cae0d1e46dfd48498d122ab40809e88e0e7ab35655a08dfbf95e820e76fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0cae0d1e46dfd48498d122ab40809e88e0e7ab35655a08dfbf95e820e76fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:00 compute-0 podman[99079]: 2026-01-31 08:01:00.720387137 +0000 UTC m=+0.023279572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:01:00 compute-0 podman[99079]: 2026-01-31 08:01:00.82082801 +0000 UTC m=+0.123720355 container init 542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1 (image=quay.io/ceph/ceph:v20, name=compassionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:00 compute-0 podman[99079]: 2026-01-31 08:01:00.825843792 +0000 UTC m=+0.128736137 container start 542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1 (image=quay.io/ceph/ceph:v20, name=compassionate_pasteur, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:01:00 compute-0 podman[99079]: 2026-01-31 08:01:00.831847334 +0000 UTC m=+0.134739679 container attach 542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1 (image=quay.io/ceph/ceph:v20, name=compassionate_pasteur, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:01:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 08:01:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:01:00 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:01:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 08:01:01 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 08:01:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 71 pg[6.f( v 46'5 (0'0,46'5] local-lis/les=54/55 n=3 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=71 pruub=13.813378334s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=46'5 active pruub 172.811096191s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:01 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 71 pg[6.f( v 46'5 (0'0,46'5] local-lis/les=54/55 n=3 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=71 pruub=13.813326836s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=46'5 unknown NOTIFY pruub 172.811096191s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:01 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 71 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:01 compute-0 compassionate_pasteur[99094]: could not fetch user info: no user info saved
Jan 31 08:01:01 compute-0 systemd[1]: libpod-542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1.scope: Deactivated successfully.
Jan 31 08:01:01 compute-0 podman[99079]: 2026-01-31 08:01:01.111448466 +0000 UTC m=+0.414340811 container died 542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1 (image=quay.io/ceph/ceph:v20, name=compassionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:01:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a0cae0d1e46dfd48498d122ab40809e88e0e7ab35655a08dfbf95e820e76fc-merged.mount: Deactivated successfully.
Jan 31 08:01:01 compute-0 podman[99079]: 2026-01-31 08:01:01.169707622 +0000 UTC m=+0.472599967 container remove 542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1 (image=quay.io/ceph/ceph:v20, name=compassionate_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:01:01 compute-0 systemd[1]: libpod-conmon-542716d690030bc27e43efe7f627075eac2fd41ce8fbee6360eee953fdff89b1.scope: Deactivated successfully.
Jan 31 08:01:01 compute-0 sudo[99076]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 08:01:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 08:01:01 compute-0 sudo[99215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzfiiipvmzzfzupauhdngntnbkxwbtqd ; /usr/bin/python3'
Jan 31 08:01:01 compute-0 sudo[99215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:01 compute-0 python3[99217]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid dc03f344-536f-5591-add9-31059f42637c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:01 compute-0 podman[99218]: 2026-01-31 08:01:01.491671389 +0000 UTC m=+0.031857716 container create c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0 (image=quay.io/ceph/ceph:v20, name=elegant_roentgen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:01:01 compute-0 systemd[1]: Started libpod-conmon-c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0.scope.
Jan 31 08:01:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a764fd54cc2350973b6295f1778d65ecb7fd7c60b5d72b0b57c07df443ae01a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a764fd54cc2350973b6295f1778d65ecb7fd7c60b5d72b0b57c07df443ae01a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:01 compute-0 podman[99218]: 2026-01-31 08:01:01.559779095 +0000 UTC m=+0.099965442 container init c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0 (image=quay.io/ceph/ceph:v20, name=elegant_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:01:01 compute-0 podman[99218]: 2026-01-31 08:01:01.563796689 +0000 UTC m=+0.103983016 container start c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0 (image=quay.io/ceph/ceph:v20, name=elegant_roentgen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:01:01 compute-0 podman[99218]: 2026-01-31 08:01:01.569369566 +0000 UTC m=+0.109555903 container attach c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0 (image=quay.io/ceph/ceph:v20, name=elegant_roentgen, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:01:01 compute-0 podman[99218]: 2026-01-31 08:01:01.476984622 +0000 UTC m=+0.017170969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:01:01 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 31 08:01:01 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 31 08:01:01 compute-0 CROND[99317]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 08:01:01 compute-0 run-parts[99320]: (/etc/cron.hourly) starting 0anacron
Jan 31 08:01:01 compute-0 anacron[99328]: Anacron started on 2026-01-31
Jan 31 08:01:01 compute-0 anacron[99328]: Will run job `cron.daily' in 25 min.
Jan 31 08:01:01 compute-0 anacron[99328]: Will run job `cron.weekly' in 45 min.
Jan 31 08:01:01 compute-0 anacron[99328]: Will run job `cron.monthly' in 65 min.
Jan 31 08:01:01 compute-0 anacron[99328]: Jobs will be executed sequentially
Jan 31 08:01:01 compute-0 run-parts[99330]: (/etc/cron.hourly) finished 0anacron
Jan 31 08:01:01 compute-0 CROND[99316]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]: {
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "user_id": "openstack",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "display_name": "openstack",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "email": "",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "suspended": 0,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "max_buckets": 1000,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "subusers": [],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "keys": [
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         {
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:             "user": "openstack",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:             "access_key": "X0AW7KCL7HU5XVLK8NY2",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:             "secret_key": "9etcxStYP5zAetxtIEfIqR5pKlo2syGVTxEqCdRC",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:             "active": true,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:             "create_date": "2026-01-31T08:01:01.810703Z"
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         }
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     ],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "swift_keys": [],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "caps": [],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "op_mask": "read, write, delete",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "default_placement": "",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "default_storage_class": "",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "placement_tags": [],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "bucket_quota": {
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "enabled": false,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "check_on_raw": false,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "max_size": -1,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "max_size_kb": 0,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "max_objects": -1
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     },
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "user_quota": {
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "enabled": false,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "check_on_raw": false,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "max_size": -1,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "max_size_kb": 0,
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:         "max_objects": -1
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     },
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "temp_url_keys": [],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "type": "rgw",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "mfa_ids": [],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "account_id": "",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "path": "/",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "create_date": "2026-01-31T08:01:01.810377Z",
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "tags": [],
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]:     "group_ids": []
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]: }
Jan 31 08:01:01 compute-0 elegant_roentgen[99233]: 
Jan 31 08:01:01 compute-0 systemd[1]: libpod-c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0.scope: Deactivated successfully.
Jan 31 08:01:01 compute-0 podman[99218]: 2026-01-31 08:01:01.845201213 +0000 UTC m=+0.385387540 container died c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0 (image=quay.io/ceph/ceph:v20, name=elegant_roentgen, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:01:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a764fd54cc2350973b6295f1778d65ecb7fd7c60b5d72b0b57c07df443ae01a9-merged.mount: Deactivated successfully.
Jan 31 08:01:01 compute-0 podman[99218]: 2026-01-31 08:01:01.90844221 +0000 UTC m=+0.448628537 container remove c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0 (image=quay.io/ceph/ceph:v20, name=elegant_roentgen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:01:01 compute-0 systemd[1]: libpod-conmon-c20c67af7d22f173374934c761a251f7272c20fd2f9750432bbeba5ca8a4fff0.scope: Deactivated successfully.
Jan 31 08:01:01 compute-0 sudo[99215]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:01 compute-0 ceph-mon[75294]: 2.e scrub starts
Jan 31 08:01:01 compute-0 ceph-mon[75294]: 2.e scrub ok
Jan 31 08:01:01 compute-0 ceph-mon[75294]: pgmap v186: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 0 objects/s recovering
Jan 31 08:01:01 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:01:01 compute-0 ceph-mon[75294]: osdmap e71: 3 total, 3 up, 3 in
Jan 31 08:01:01 compute-0 ceph-mon[75294]: 3.1a scrub starts
Jan 31 08:01:01 compute-0 ceph-mon[75294]: 3.1a scrub ok
Jan 31 08:01:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 08:01:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 08:01:02 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 08:01:02 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 72 pg[6.f( v 46'5 lc 46'1 (0'0,46'5] local-lis/les=71/72 n=3 ec=43/26 lis/c=54/54 les/c/f=55/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=46'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.6657629383800317e-06 of space, bias 4.0, pg target 0.001998915526056038 quantized to 16 (current 16)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:01:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:01:02 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:02 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 31 08:01:02 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 31 08:01:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v189: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 341 B/s wr, 47 op/s; 11 B/s, 0 objects/s recovering
Jan 31 08:01:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 08:01:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 08:01:03 compute-0 ceph-mon[75294]: 5.15 scrub starts
Jan 31 08:01:03 compute-0 ceph-mon[75294]: 5.15 scrub ok
Jan 31 08:01:03 compute-0 ceph-mon[75294]: osdmap e72: 3 total, 3 up, 3 in
Jan 31 08:01:03 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:03 compute-0 ceph-mon[75294]: pgmap v189: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 341 B/s wr, 47 op/s; 11 B/s, 0 objects/s recovering
Jan 31 08:01:03 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 08:01:03 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev 38f8c1cd-6d09-4447-82f2-7fd1f35272fe (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 08:01:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:01:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 08:01:04 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 08:01:04 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 08:01:04 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev a4d81496-8f22-4737-8c18-f8e975311d6a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 08:01:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:01:04 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:04 compute-0 ceph-mon[75294]: 5.a scrub starts
Jan 31 08:01:04 compute-0 ceph-mon[75294]: 5.a scrub ok
Jan 31 08:01:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:04 compute-0 ceph-mon[75294]: osdmap e73: 3 total, 3 up, 3 in
Jan 31 08:01:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v192: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 511 B/s wr, 71 op/s; 153 B/s, 0 objects/s recovering
Jan 31 08:01:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:01:04 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:01:04 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 08:01:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 08:01:05 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 08:01:05 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev ade5f7d7-f1f4-42e7-ba63-3a9b57cc50b1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 08:01:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:01:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:05 compute-0 ceph-mon[75294]: osdmap e74: 3 total, 3 up, 3 in
Jan 31 08:01:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:05 compute-0 ceph-mon[75294]: pgmap v192: 181 pgs: 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 511 B/s wr, 71 op/s; 153 B/s, 0 objects/s recovering
Jan 31 08:01:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:05 compute-0 ceph-mon[75294]: osdmap e75: 3 total, 3 up, 3 in
Jan 31 08:01:05 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 08:01:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 75 pg[8.0( v 41'6 (0'0,41'6] local-lis/les=40/41 n=6 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=75 pruub=8.014966011s) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 41'5 mlcod 41'5 active pruub 167.394760132s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 75 pg[9.0( v 71'551 (0'0,71'551] local-lis/les=43/44 n=210 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75 pruub=11.986753464s) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 71'550 mlcod 71'550 active pruub 171.366806030s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 75 pg[8.0( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=75 pruub=8.014966011s) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 41'5 mlcod 0'0 unknown pruub 167.394760132s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x556c5b076480) split_cache   moving buffer(0x556c5c48e480 space 0x556c5b85dd40 0x0~424 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x556c5b076480) split_cache   moving buffer(0x556c5b77f080 space 0x556c5b8c3740 0x0~1b4 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x556c5b076480) split_cache   moving buffer(0x556c5b77f700 space 0x556c5b897440 0x0~2e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x556c5b076480) split_cache   moving buffer(0x556c5c4a3e00 space 0x556c5b807440 0x0~2e clean)
Jan 31 08:01:05 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 08:01:05 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 75 pg[9.0( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75 pruub=11.986753464s) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 71'550 mlcod 0'0 unknown pruub 171.366806030s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4b3e80 space 0x556c5dbc1740 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4b3b80 space 0x556c5dbc3a40 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498f80 space 0x556c5ca27d40 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c492a80 space 0x556c5dbea540 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c48e780 space 0x556c5dbd8240 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c9b3600 space 0x556c5dbdf440 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c47f180 space 0x556c5b84d440 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c48fb80 space 0x556c5db42840 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498700 space 0x556c5c549740 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3f80 space 0x556c5db21d40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a2e80 space 0x556c5db38e40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c48e800 space 0x556c5d6e3440 0x0~1c clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c47f680 space 0x556c5db1c840 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c9b3f00 space 0x556c5db9eb40 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3500 space 0x556c5db38540 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5b779a80 space 0x556c5db48840 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498100 space 0x556c5dbace40 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c3bf500 space 0x556c5c9ef440 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4b2500 space 0x556c5db49a40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49d500 space 0x556c5db43140 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c48e100 space 0x556c5c547740 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49cf00 space 0x556c5db33d40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49c280 space 0x556c5db33440 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c499980 space 0x556c5c546e40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c448600 space 0x556c5b84dd40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498a80 space 0x556c5a27ee40 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498900 space 0x556c5c548e40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c363d00 space 0x556c5c9ee240 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5beb5300 space 0x556c5db73a40 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c492900 space 0x556c5db9f740 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c48e280 space 0x556c5db8ee40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c499500 space 0x556c5db9e240 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49dc80 space 0x556c5db39740 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3a80 space 0x556c5c9eb140 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3780 space 0x556c5dbad740 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3480 space 0x556c5db20b40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c363c00 space 0x556c5ca7b740 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498500 space 0x556c5db32240 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c3bf700 space 0x556c5db8fa40 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3880 space 0x556c5dbc0b40 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c363700 space 0x556c5dbeae40 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3600 space 0x556c5db1da40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498300 space 0x556c5db32b40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4b3200 space 0x556c5c9eba40 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498b00 space 0x556c5c548540 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c345200 space 0x556c5dbd9740 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c3bf300 space 0x556c5dbde240 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498280 space 0x556c5db3d440 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c499e00 space 0x556c5c546540 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49dd00 space 0x556c5c9efa40 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49c080 space 0x556c5db72840 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4b3f80 space 0x556c5c9ec540 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49d700 space 0x556c5db43a40 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c498580 space 0x556c5dbd8e40 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3800 space 0x556c5db1d140 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4b3880 space 0x556c5dbcd740 0x0~9a clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c49de80 space 0x556c5db73140 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4a3e80 space 0x556c5db20240 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c9b3800 space 0x556c5dbc2540 0x0~98 clean)
Jan 31 08:01:05 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x556c5bd906c0) split_cache   moving buffer(0x556c5c4b3080 space 0x556c5db49140 0x0~6e clean)
Jan 31 08:01:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:05 compute-0 ceph-mgr[75591]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Jan 31 08:01:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 08:01:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 08:01:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.15( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.14( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.16( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.17( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.15( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.17( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.16( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.14( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.11( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.11( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.10( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.2( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.10( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.2( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.3( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.c( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.3( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.d( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.c( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.e( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.d( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.8( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.9( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.f( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.f( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.e( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.b( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.a( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.9( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.8( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1( v 41'6 (0'0,41'6] local-lis/les=40/41 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.6( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.7( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.6( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.7( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.5( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.4( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.4( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.5( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] update: starting ev a46e17ea-c25d-4d6d-b4cc-b8b2f4c3b27a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev 38f8c1cd-6d09-4447-82f2-7fd1f35272fe (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event 38f8c1cd-6d09-4447-82f2-7fd1f35272fe (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev a4d81496-8f22-4737-8c18-f8e975311d6a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event a4d81496-8f22-4737-8c18-f8e975311d6a (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev ade5f7d7-f1f4-42e7-ba63-3a9b57cc50b1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event ade5f7d7-f1f4-42e7-ba63-3a9b57cc50b1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] complete: finished ev a46e17ea-c25d-4d6d-b4cc-b8b2f4c3b27a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event a46e17ea-c25d-4d6d-b4cc-b8b2f4c3b27a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1b( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1a( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.19( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.18( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.18( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1e( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.19( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1f( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1f( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1c( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1e( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1d( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1d( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.13( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1c( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.12( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.a( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.b( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.13( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.12( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1a( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1b( v 71'551 lc 0'0 (0'0,71'551] local-lis/les=43/44 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.14( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:01:06 compute-0 ceph-mon[75294]: 5.14 scrub starts
Jan 31 08:01:06 compute-0 ceph-mon[75294]: 5.14 scrub ok
Jan 31 08:01:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:01:06 compute-0 ceph-mon[75294]: osdmap e76: 3 total, 3 up, 3 in
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.16( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.15( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.11( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.17( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.2( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.10( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.14( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.3( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.2( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.10( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.d( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.c( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.0( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 41'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.8( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.f( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.b( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.a( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.0( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 71'550 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.9( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.e( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.6( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.7( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.4( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.5( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.4( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1b( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1a( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.19( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.18( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1f( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.5( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1e( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1d( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.a( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.12( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.12( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.13( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1a( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[8.1c( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=40/40 les/c/f=41/41/0 sis=75) [1] r=0 lpr=75 pi=[40,75)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 76 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [1] r=0 lpr=75 pi=[43,75)/1 crt=71'551 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:06 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 31 08:01:06 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 31 08:01:06 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 31 08:01:06 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 31 08:01:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v195: 243 pgs: 62 unknown, 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 153 B/s, 0 objects/s recovering
Jan 31 08:01:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:01:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:01:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 08:01:07 compute-0 ceph-mon[75294]: 7.19 scrub starts
Jan 31 08:01:07 compute-0 ceph-mon[75294]: 7.19 scrub ok
Jan 31 08:01:07 compute-0 ceph-mon[75294]: pgmap v195: 243 pgs: 62 unknown, 181 active+clean; 461 KiB data, 82 MiB used, 60 GiB / 60 GiB avail; 153 B/s, 0 objects/s recovering
Jan 31 08:01:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:01:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 08:01:07 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 08:01:07 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 77 pg[11.0( v 71'2 (0'0,71'2] local-lis/les=47/48 n=2 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=13.584725380s) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 71'1 mlcod 71'1 active pruub 175.152069092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:07 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 77 pg[11.0( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=13.584725380s) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 71'1 mlcod 0'0 unknown pruub 175.152069092s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:07 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 77 pg[10.0( v 71'66 (0'0,71'66] local-lis/les=45/46 n=9 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=77 pruub=11.485410690s) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 71'65 mlcod 71'65 active pruub 168.628448486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:07 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 77 pg[10.0( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=77 pruub=11.485410690s) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 71'65 mlcod 0'0 unknown pruub 168.628448486s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 31 08:01:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 31 08:01:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 08:01:08 compute-0 ceph-mon[75294]: 2.c scrub starts
Jan 31 08:01:08 compute-0 ceph-mon[75294]: 2.c scrub ok
Jan 31 08:01:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:01:08 compute-0 ceph-mon[75294]: osdmap e77: 3 total, 3 up, 3 in
Jan 31 08:01:08 compute-0 ceph-mon[75294]: 4.14 scrub starts
Jan 31 08:01:08 compute-0 ceph-mon[75294]: 4.14 scrub ok
Jan 31 08:01:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 08:01:08 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.17( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.14( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.16( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.15( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.13( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.12( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=47/48 n=1 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.f( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.e( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.d( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.b( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.3( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.c( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.8( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.a( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.2( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=1 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.4( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.5( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.6( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.7( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.18( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1a( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1b( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1c( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1d( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1f( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1e( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.11( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.9( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.10( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.19( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=47/48 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1e( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1b( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.d( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.b( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.19( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.a( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.13( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.12( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.10( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1f( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.11( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1d( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1c( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.18( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1a( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.7( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.6( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.5( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.4( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.8( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.f( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.c( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.e( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1( v 71'66 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.2( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.3( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.14( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.15( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.9( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.16( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.17( v 71'66 lc 0'0 (0'0,71'66] local-lis/les=45/46 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:08 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.13( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.16( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.0( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 71'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=77/78 n=1 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.c( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.a( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=77/78 n=1 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.5( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.6( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.7( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1d( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.9( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 78 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=47/47 les/c/f=48/48/0 sis=77) [1] r=0 lpr=77 pi=[47,77)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.13( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1b( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.a( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.b( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.12( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.d( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.10( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1f( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.11( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1d( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1e( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.19( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1c( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.18( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.7( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1a( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.6( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.5( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.4( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.8( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.c( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.0( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 71'65 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.1( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.e( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.2( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.f( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.14( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.3( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.15( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.16( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.9( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 78 pg[10.17( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=45/45 les/c/f=46/46/0 sis=77) [2] r=0 lpr=77 pi=[45,77)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 2 peering, 62 unknown, 241 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:09 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 31 08:01:09 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 31 08:01:09 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 08:01:09 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 08:01:09 compute-0 ceph-mon[75294]: osdmap e78: 3 total, 3 up, 3 in
Jan 31 08:01:09 compute-0 ceph-mon[75294]: 5.b scrub starts
Jan 31 08:01:09 compute-0 ceph-mon[75294]: 5.b scrub ok
Jan 31 08:01:09 compute-0 ceph-mon[75294]: pgmap v198: 305 pgs: 2 peering, 62 unknown, 241 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:09 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 31 08:01:09 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 31 08:01:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:10 compute-0 ceph-mon[75294]: 3.9 scrub starts
Jan 31 08:01:10 compute-0 ceph-mon[75294]: 3.9 scrub ok
Jan 31 08:01:10 compute-0 ceph-mon[75294]: 5.11 scrub starts
Jan 31 08:01:10 compute-0 ceph-mon[75294]: 5.11 scrub ok
Jan 31 08:01:10 compute-0 ceph-mon[75294]: 2.0 scrub starts
Jan 31 08:01:10 compute-0 ceph-mon[75294]: 2.0 scrub ok
Jan 31 08:01:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 2 peering, 62 unknown, 241 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:10 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 16 completed events
Jan 31 08:01:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:01:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:12 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 31 08:01:12 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 31 08:01:12 compute-0 ceph-mon[75294]: pgmap v199: 305 pgs: 2 peering, 62 unknown, 241 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:13 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 31 08:01:13 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 31 08:01:13 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 08:01:13 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 08:01:13 compute-0 ceph-mon[75294]: 2.8 scrub starts
Jan 31 08:01:13 compute-0 ceph-mon[75294]: 2.8 scrub ok
Jan 31 08:01:13 compute-0 ceph-mon[75294]: pgmap v200: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:13 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 31 08:01:13 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 31 08:01:14 compute-0 ceph-mon[75294]: 3.a scrub starts
Jan 31 08:01:14 compute-0 ceph-mon[75294]: 3.a scrub ok
Jan 31 08:01:14 compute-0 ceph-mon[75294]: 4.10 scrub starts
Jan 31 08:01:14 compute-0 ceph-mon[75294]: 4.10 scrub ok
Jan 31 08:01:14 compute-0 ceph-mon[75294]: 5.0 scrub starts
Jan 31 08:01:14 compute-0 ceph-mon[75294]: 5.0 scrub ok
Jan 31 08:01:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:01:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:01:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:01:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:01:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 08:01:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:01:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:01:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:01:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 08:01:16 compute-0 ceph-mon[75294]: pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:16 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:01:16 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:01:16 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:01:16 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:01:16 compute-0 ceph-mgr[75591]: [progress INFO root] Completed event 15ec1cf8-e792-4334-ab06-24b2e55da744 (Global Recovery Event) in 10 seconds
Jan 31 08:01:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:01:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:01:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:01:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:01:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 08:01:16 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.19( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.587844849s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.496963501s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.556420326s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.898223877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.556358337s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.898223877s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.946352959s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.288574219s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.946293831s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.288574219s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.15( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.946480751s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.288848877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.15( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.946440697s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.288848877s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.19( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.587692261s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.496963501s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.d( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586981773s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 active pruub 174.496292114s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.d( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586901665s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 unknown NOTIFY pruub 174.496292114s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.1e( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586946487s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.496475220s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.13( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586521149s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.496093750s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581926346s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.924697876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581898689s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.924697876s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.10( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.946343422s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289230347s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945890427s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.288818359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.10( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.946315765s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289230347s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945877075s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.288818359s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.582299232s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.924926758s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.1e( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586925507s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.496475220s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581797600s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.924880981s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581848145s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.924926758s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581759453s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.924880981s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.11( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945836067s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289001465s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.11( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945772171s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289001465s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=77/78 n=1 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581700325s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.924957275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=77/78 n=1 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581685066s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.924957275s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.2( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945580482s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.288925171s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945706367s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289062500s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.2( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945556641s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.288925171s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.14( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945690155s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289062500s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945694923s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289062500s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.14( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945672989s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289062500s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945155144s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.288681030s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945137024s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.288681030s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.582315445s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.925949097s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.582294464s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.925949097s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.13( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586500168s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.496093750s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945590019s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289291382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945565224s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289291382s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.582199097s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926010132s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.c( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945474625s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289276123s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.582185745s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926010132s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.c( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945435524s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289276123s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.d( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945314407s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289199829s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.d( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945292473s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289199829s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.582032204s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.925979614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.582019806s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.925979614s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581959724s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926040649s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.e( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945535660s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289627075s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945281982s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289428711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945265770s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289428711s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581940651s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926040649s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945171356s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289367676s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945136070s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289367676s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581809044s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926086426s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581794739s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926086426s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.12( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586568832s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 active pruub 174.496261597s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.e( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945518494s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289627075s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945120811s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289535522s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.945078850s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289535522s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.f( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944928169s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289443970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.f( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944908142s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289443970s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581506729s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926132202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581461906s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926132202s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.b( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.587004662s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.496231079s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.b( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944762230s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289474487s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.b( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944647789s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289474487s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=77/78 n=1 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581357956s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926223755s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.9( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944653511s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289535522s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581338882s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926239014s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.9( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944636345s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289535522s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581323624s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926239014s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=77/78 n=1 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581340790s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926223755s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944592476s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289642334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944578171s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289642334s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.6( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581213951s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926376343s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.6( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581185341s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926376343s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.4( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944453239s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289672852s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581142426s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926422119s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.11( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586622238s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.496368408s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.12( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586538315s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 unknown NOTIFY pruub 174.496261597s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.b( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586496353s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.496231079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.11( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586604118s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.496368408s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.10( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586400032s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.496307373s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.10( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586381912s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.496307373s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.1a( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586983681s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497024536s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.1a( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586966515s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497024536s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.7( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586805344s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497024536s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.7( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586788177s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497024536s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.6( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586688042s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497039795s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.6( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586673737s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497039795s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.4( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586557388s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497085571s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.8( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586541176s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497100830s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.4( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586533546s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497085571s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.8( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586525917s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497100830s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.f( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586480141s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497222900s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.f( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586465836s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497222900s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.9( v 78'67 (0'0,78'67] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586421967s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 active pruub 174.497314453s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.9( v 78'67 (0'0,78'67] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586400032s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 unknown NOTIFY pruub 174.497314453s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.e( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586107254s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 active pruub 174.497207642s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.e( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.586084366s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 unknown NOTIFY pruub 174.497207642s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.2( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585893631s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497222900s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.2( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585875511s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497222900s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.1( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585812569s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497192383s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.1( v 71'66 (0'0,71'66] local-lis/les=77/78 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585790634s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497192383s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.14( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585757256s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 active pruub 174.497238159s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.14( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585735321s) [1] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 unknown NOTIFY pruub 174.497238159s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.16( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585689545s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497299194s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.16( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585672379s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497299194s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.15( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585518837s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 active pruub 174.497283936s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.15( v 78'67 (0'0,78'67] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585484505s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 71'66 unknown NOTIFY pruub 174.497283936s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.17( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585241318s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 active pruub 174.497329712s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[10.17( v 71'66 (0'0,71'66] local-lis/les=77/78 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.585222244s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 unknown NOTIFY pruub 174.497329712s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.15( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.15( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.12( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.11( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.2( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.d( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.5( v 76'552 (0'0,76'552] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944590569s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 71'551 active pruub 184.289871216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.581104279s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926422119s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1b( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944346428s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289718628s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1b( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944332123s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289718628s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580987930s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926437378s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580975533s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926437378s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580972672s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926483154s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580959320s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926483154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.18( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944183350s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289810181s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.18( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944169044s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289810181s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944155693s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289810181s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944144249s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289810181s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580737114s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926452637s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580726624s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926452637s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944033623s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.289886475s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944022179s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.289886475s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.6( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943772316s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289657593s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.6( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943760872s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289657593s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1f( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943893433s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.289855957s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1f( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943883896s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289855957s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1d( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944038391s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.290039062s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1d( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944027901s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.290039062s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580448151s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926544189s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943943024s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.290054321s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.d( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.b( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943931580s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.290054321s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.3( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580436707s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926544189s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1c( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943883896s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.290100098s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580317497s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926528931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1c( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943872452s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.290100098s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580307007s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926528931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580218315s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926498413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.580207825s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926498413s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.4( v 41'6 (0'0,41'6] local-lis/les=75/76 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944377899s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.289672852s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.8( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.12( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943713188s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.290161133s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.5( v 76'552 (0'0,76'552] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944556236s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 71'551 unknown NOTIFY pruub 184.289871216s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.12( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943696976s) [2] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.290161133s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.9( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.579941750s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926574707s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944687843s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.291320801s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.9( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.579924583s) [2] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926574707s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944668770s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.291320801s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.579904556s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926589966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.579876900s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926589966s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1a( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944662094s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 active pruub 184.291412354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.2( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.579851151s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 active pruub 178.926589966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[8.1a( v 41'6 (0'0,41'6] local-lis/les=75/76 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944649696s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 unknown NOTIFY pruub 184.291412354s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=77/78 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79 pruub=8.579830170s) [0] r=-1 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 unknown NOTIFY pruub 178.926589966s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944582939s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.291366577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.944567680s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.291366577s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943259239s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 active pruub 184.290084839s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79 pruub=13.943184853s) [0] r=-1 lpr=79 pi=[75,79)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 184.290084839s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.19( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.13( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.12( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.11( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.b( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.10( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.18( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.1a( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.6( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.1b( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.2( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 79 pg[10.14( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.1a( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.1b( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.1c( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.1e( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.11( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.1f( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.1c( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.4( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[8.12( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 79 pg[11.9( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.10( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.10( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.11( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.5( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.9( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.b( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.4( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.8( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.14( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.15( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.4( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.6( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.9( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.6( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.9( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.d( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.f( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.e( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.e( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.e( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.f( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.c( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.1( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.1( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.3( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.1e( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.1d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.18( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.1a( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.19( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.1b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[11.17( empty local-lis/les=0/0 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[9.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.14( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.16( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[10.1( empty local-lis/les=0/0 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.1f( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 79 pg[8.1d( empty local-lis/les=0/0 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:16 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 08:01:16 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 08:01:16 compute-0 sshd-session[99348]: Accepted publickey for zuul from 192.168.122.30 port 60856 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:01:16 compute-0 systemd-logind[810]: New session 34 of user zuul.
Jan 31 08:01:16 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 31 08:01:16 compute-0 sshd-session[99348]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:01:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 08:01:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:01:17 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 31 08:01:17 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 31 08:01:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 08:01:17 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:01:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 08:01:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:01:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:01:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:01:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:01:17 compute-0 ceph-mon[75294]: osdmap e79: 3 total, 3 up, 3 in
Jan 31 08:01:17 compute-0 ceph-mon[75294]: 7.f scrub starts
Jan 31 08:01:17 compute-0 ceph-mon[75294]: 7.f scrub ok
Jan 31 08:01:17 compute-0 ceph-mon[75294]: pgmap v203: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.3( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.3( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.1( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.9( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.9( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.5( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.5( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.11( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.11( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[9.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=-1 lpr=80 pi=[75,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.16( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.1f( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.5( v 76'552 (0'0,76'552] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.5( v 76'552 (0'0,76'552] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.14( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.1( v 71'66 (0'0,71'66] local-lis/les=79/80 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.1d( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.18( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.1a( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.1e( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=79/80 n=1 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.e( v 78'67 lc 71'54 (0'0,78'67] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=78'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.c( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.e( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.f( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.d( v 78'67 lc 71'55 (0'0,78'67] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=78'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.17( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.9( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.6( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.6( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=79/80 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.4( v 71'66 (0'0,71'66] local-lis/les=79/80 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.15( v 78'67 lc 71'53 (0'0,78'67] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=78'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.8( v 71'66 (0'0,71'66] local-lis/les=79/80 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.b( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.9( v 78'67 lc 71'58 (0'0,78'67] local-lis/les=79/80 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=78'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[8.10( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [0] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[10.7( v 71'66 (0'0,71'66] local-lis/les=79/80 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 80 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [0] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.13( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.12( v 78'67 lc 49'17 (0'0,78'67] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=78'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.2( v 71'66 (0'0,71'66] local-lis/les=79/80 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.f( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.6( v 71'66 (0'0,71'66] local-lis/les=79/80 n=1 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.b( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.1a( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.19( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.14( v 78'67 lc 71'57 (0'0,78'67] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=78'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.1b( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.11( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.12( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.4( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=79/80 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.1c( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.d( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.15( v 41'6 (0'0,41'6] local-lis/les=79/80 n=0 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=79/80 n=1 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.9( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[8.2( v 41'6 (0'0,41'6] local-lis/les=79/80 n=1 ec=75/40 lis/c=75/75 les/c/f=76/76/0 sis=79) [2] r=0 lpr=79 pi=[75,79)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 80 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=79/80 n=0 ec=77/47 lis/c=77/77 les/c/f=78/78/0 sis=79) [2] r=0 lpr=79 pi=[77,79)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.11( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 80 pg[10.10( v 71'66 (0'0,71'66] local-lis/les=79/80 n=0 ec=77/45 lis/c=77/77 les/c/f=78/78/0 sis=79) [1] r=0 lpr=79 pi=[77,79)/1 crt=71'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:17 compute-0 python3.9[99501]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:01:18 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 08:01:18 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 08:01:18 compute-0 sudo[99568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:18 compute-0 sudo[99568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:18 compute-0 sudo[99568]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:18 compute-0 sudo[99594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:01:18 compute-0 sudo[99594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 08:01:18 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 31 08:01:18 compute-0 ceph-mon[75294]: 7.3 scrub starts
Jan 31 08:01:18 compute-0 ceph-mon[75294]: 7.3 scrub ok
Jan 31 08:01:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:01:18 compute-0 ceph-mon[75294]: osdmap e80: 3 total, 3 up, 3 in
Jan 31 08:01:18 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 08:01:18 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.5( v 76'552 (0'0,76'552] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=76'552 lcod 71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 81 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=80) [0]/[1] async=[0] r=0 lpr=80 pi=[75,80)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:18 compute-0 sudo[99594]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:01:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:01:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:01:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:01:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:01:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:01:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:01:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:01:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 3 active+recovery_wait, 2 active+recovery_wait+degraded, 11 peering, 1 active+recovering, 288 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 2/251 objects degraded (0.797%); 1/251 objects misplaced (0.398%); 250 B/s, 1 objects/s recovering
Jan 31 08:01:18 compute-0 sudo[99812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myfznvidvcmzivniahwfgnmaapcrqhas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846478.4227364-27-52696700290021/AnsiballZ_command.py'
Jan 31 08:01:18 compute-0 sudo[99812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:18 compute-0 sudo[99788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:18 compute-0 sudo[99788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:18 compute-0 sudo[99788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:18 compute-0 sudo[99827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:01:18 compute-0 sudo[99827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:18 compute-0 python3.9[99824]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:19 compute-0 podman[99871]: 2026-01-31 08:01:19.109857195 +0000 UTC m=+0.036782666 container create c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_leakey, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:01:19 compute-0 systemd[1]: Started libpod-conmon-c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32.scope.
Jan 31 08:01:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:19 compute-0 podman[99871]: 2026-01-31 08:01:19.09174224 +0000 UTC m=+0.018667741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:01:19 compute-0 podman[99871]: 2026-01-31 08:01:19.190586598 +0000 UTC m=+0.117512099 container init c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:19 compute-0 podman[99871]: 2026-01-31 08:01:19.197195556 +0000 UTC m=+0.124121027 container start c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:01:19 compute-0 systemd[1]: libpod-c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32.scope: Deactivated successfully.
Jan 31 08:01:19 compute-0 inspiring_leakey[99888]: 167 167
Jan 31 08:01:19 compute-0 conmon[99888]: conmon c8f0e32a377fc0859b8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32.scope/container/memory.events
Jan 31 08:01:19 compute-0 podman[99871]: 2026-01-31 08:01:19.204096702 +0000 UTC m=+0.131022163 container attach c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:01:19 compute-0 podman[99871]: 2026-01-31 08:01:19.204872914 +0000 UTC m=+0.131798385 container died c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-19c91ef55fc163699e4b84ef9a42f87c8502287b2b694d29c951c944266df5a6-merged.mount: Deactivated successfully.
Jan 31 08:01:19 compute-0 podman[99871]: 2026-01-31 08:01:19.28848295 +0000 UTC m=+0.215408421 container remove c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:19 compute-0 systemd[1]: libpod-conmon-c8f0e32a377fc0859b8dbc0f3c77798fa6f7829e47801afa15f5348a0dc60b32.scope: Deactivated successfully.
Jan 31 08:01:19 compute-0 podman[99915]: 2026-01-31 08:01:19.421808157 +0000 UTC m=+0.042994832 container create 46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:01:19 compute-0 systemd[1]: Started libpod-conmon-46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf.scope.
Jan 31 08:01:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 08:01:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 08:01:19 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 08:01:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc09f72533a147106f79297d0da3665b31b67f658ef29637700cbf6b93011d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.026473999s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690338135s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.026487350s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690399170s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.026395798s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690338135s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.026410103s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690399170s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.026061058s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690139771s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.025776863s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690139771s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.006764412s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.671356201s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.025711060s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690383911s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.025663376s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690383911s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.025546074s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690338135s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.025491714s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690338135s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.025526047s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690444946s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.006489754s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.671356201s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.025470734s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690444946s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.024843216s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690475464s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.5( v 81'553 (0'0,81'553] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.024774551s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=76'552 lcod 76'552 active pruub 188.690475464s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.024779320s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690475464s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.5( v 81'553 (0'0,81'553] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.024687767s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=76'552 lcod 76'552 unknown NOTIFY pruub 188.690475464s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.024635315s) [0] async=[0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 active pruub 188.690567017s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 82 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82 pruub=15.024595261s) [0] r=-1 lpr=82 pi=[75,82)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690567017s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-mon[75294]: 3.6 scrub starts
Jan 31 08:01:19 compute-0 ceph-mon[75294]: 3.6 scrub ok
Jan 31 08:01:19 compute-0 ceph-mon[75294]: 5.13 scrub starts
Jan 31 08:01:19 compute-0 ceph-mon[75294]: 5.13 scrub ok
Jan 31 08:01:19 compute-0 ceph-mon[75294]: osdmap e81: 3 total, 3 up, 3 in
Jan 31 08:01:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:01:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:01:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:01:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:01:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:01:19 compute-0 ceph-mon[75294]: pgmap v206: 305 pgs: 3 active+recovery_wait, 2 active+recovery_wait+degraded, 11 peering, 1 active+recovering, 288 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 2/251 objects degraded (0.797%); 1/251 objects misplaced (0.398%); 250 B/s, 1 objects/s recovering
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.5( v 81'553 (0'0,81'553] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 pct=0'0 crt=76'552 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:19 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 82 pg[9.5( v 81'553 (0'0,81'553] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=76'552 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc09f72533a147106f79297d0da3665b31b67f658ef29637700cbf6b93011d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:19 compute-0 podman[99915]: 2026-01-31 08:01:19.401753648 +0000 UTC m=+0.022940323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:01:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc09f72533a147106f79297d0da3665b31b67f658ef29637700cbf6b93011d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc09f72533a147106f79297d0da3665b31b67f658ef29637700cbf6b93011d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc09f72533a147106f79297d0da3665b31b67f658ef29637700cbf6b93011d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:19 compute-0 podman[99915]: 2026-01-31 08:01:19.514617874 +0000 UTC m=+0.135804599 container init 46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:01:19 compute-0 podman[99915]: 2026-01-31 08:01:19.519430661 +0000 UTC m=+0.140617346 container start 46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_morse, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:01:19 compute-0 podman[99915]: 2026-01-31 08:01:19.527456898 +0000 UTC m=+0.148643583 container attach 46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_morse, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:01:19 compute-0 ceph-mon[75294]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/251 objects degraded (0.797%), 2 pgs degraded (PG_DEGRADED)
Jan 31 08:01:19 compute-0 trusting_morse[99932]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:01:19 compute-0 trusting_morse[99932]: --> All data devices are unavailable
Jan 31 08:01:19 compute-0 podman[99915]: 2026-01-31 08:01:19.9977743 +0000 UTC m=+0.618960985 container died 46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:01:19 compute-0 systemd[1]: libpod-46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf.scope: Deactivated successfully.
Jan 31 08:01:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc09f72533a147106f79297d0da3665b31b67f658ef29637700cbf6b93011d0-merged.mount: Deactivated successfully.
Jan 31 08:01:20 compute-0 podman[99915]: 2026-01-31 08:01:20.080348136 +0000 UTC m=+0.701534811 container remove 46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:01:20 compute-0 systemd[1]: libpod-conmon-46f410ad54a6737921d3aa2e5cb609c67797aeb288a64901f649f90a1e7818cf.scope: Deactivated successfully.
Jan 31 08:01:20 compute-0 sudo[99827]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:20 compute-0 sudo[99964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:20 compute-0 sudo[99964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:20 compute-0 sudo[99964]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:20 compute-0 sudo[99989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:01:20 compute-0 sudo[99989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:20 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 31 08:01:20 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 31 08:01:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 08:01:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 08:01:20 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.143117905s) [0] async=[0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 active pruub 188.690597534s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.143073082s) [0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690597534s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.143027306s) [0] async=[0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 active pruub 188.690582275s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142850876s) [0] async=[0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 active pruub 188.690628052s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142778397s) [0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690628052s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142585754s) [0] async=[0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 active pruub 188.690521240s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=80/81 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142543793s) [0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690521240s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142110825s) [0] async=[0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 active pruub 188.690170288s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142082214s) [0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690170288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142371178s) [0] async=[0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 active pruub 188.690490723s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142325401s) [0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690490723s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:20 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 83 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=80/81 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83 pruub=14.142971039s) [0] r=-1 lpr=83 pi=[75,83)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 188.690582275s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.1b( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.1d( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.3( v 71'551 (0'0,71'551] local-lis/les=82/83 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.1( v 71'551 (0'0,71'551] local-lis/les=82/83 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.9( v 71'551 (0'0,71'551] local-lis/les=82/83 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.5( v 81'553 (0'0,81'553] local-lis/les=82/83 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=81'553 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.11( v 71'551 (0'0,71'551] local-lis/les=82/83 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.d( v 71'551 (0'0,71'551] local-lis/les=82/83 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 83 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=82) [0] r=0 lpr=82 pi=[75,82)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:20 compute-0 podman[100024]: 2026-01-31 08:01:20.46304674 +0000 UTC m=+0.032442744 container create 071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:01:20 compute-0 ceph-mon[75294]: osdmap e82: 3 total, 3 up, 3 in
Jan 31 08:01:20 compute-0 ceph-mon[75294]: Health check failed: Degraded data redundancy: 2/251 objects degraded (0.797%), 2 pgs degraded (PG_DEGRADED)
Jan 31 08:01:20 compute-0 ceph-mon[75294]: osdmap e83: 3 total, 3 up, 3 in
Jan 31 08:01:20 compute-0 systemd[1]: Started libpod-conmon-071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51.scope.
Jan 31 08:01:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:20 compute-0 podman[100024]: 2026-01-31 08:01:20.446761727 +0000 UTC m=+0.016157751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:01:20 compute-0 podman[100024]: 2026-01-31 08:01:20.546453059 +0000 UTC m=+0.115849133 container init 071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_joliot, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:01:20 compute-0 podman[100024]: 2026-01-31 08:01:20.554314822 +0000 UTC m=+0.123710856 container start 071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:01:20 compute-0 sleepy_joliot[100040]: 167 167
Jan 31 08:01:20 compute-0 systemd[1]: libpod-071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51.scope: Deactivated successfully.
Jan 31 08:01:20 compute-0 podman[100024]: 2026-01-31 08:01:20.568279639 +0000 UTC m=+0.137675673 container attach 071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_joliot, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:01:20 compute-0 podman[100024]: 2026-01-31 08:01:20.568991869 +0000 UTC m=+0.138387903 container died 071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:01:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccccb1730a2fcf7bb72810221c735039e063320879093c8aab8ad7c7c9d6eb57-merged.mount: Deactivated successfully.
Jan 31 08:01:20 compute-0 podman[100024]: 2026-01-31 08:01:20.644013451 +0000 UTC m=+0.213409485 container remove 071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:01:20 compute-0 systemd[1]: libpod-conmon-071d836fd11b78436b0ca5e40b1c487e7307174c957a7fd08f0517abc11d4e51.scope: Deactivated successfully.
Jan 31 08:01:20 compute-0 podman[100066]: 2026-01-31 08:01:20.796330578 +0000 UTC m=+0.044348211 container create b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_morse, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:01:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 3 active+recovery_wait, 2 active+recovery_wait+degraded, 11 peering, 1 active+recovering, 288 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 2/251 objects degraded (0.797%); 1/251 objects misplaced (0.398%); 375 B/s, 1 objects/s recovering
Jan 31 08:01:20 compute-0 systemd[1]: Started libpod-conmon-b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b.scope.
Jan 31 08:01:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:20 compute-0 podman[100066]: 2026-01-31 08:01:20.775477836 +0000 UTC m=+0.023495499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ccf837575209dccb5e87ef31f588572f7b577444a9c93a004b9aae3280664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ccf837575209dccb5e87ef31f588572f7b577444a9c93a004b9aae3280664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ccf837575209dccb5e87ef31f588572f7b577444a9c93a004b9aae3280664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ccf837575209dccb5e87ef31f588572f7b577444a9c93a004b9aae3280664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:20 compute-0 podman[100066]: 2026-01-31 08:01:20.917561172 +0000 UTC m=+0.165578865 container init b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_morse, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:01:20 compute-0 podman[100066]: 2026-01-31 08:01:20.92239131 +0000 UTC m=+0.170408923 container start b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_morse, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:01:20 compute-0 podman[100066]: 2026-01-31 08:01:20.930701006 +0000 UTC m=+0.178718709 container attach b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_morse, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:01:21 compute-0 ceph-mgr[75591]: [progress INFO root] Writing back 17 completed events
Jan 31 08:01:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:01:21 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:21 compute-0 wonderful_morse[100083]: {
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:     "0": [
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:         {
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "devices": [
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "/dev/loop3"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             ],
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_name": "ceph_lv0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_size": "21470642176",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "name": "ceph_lv0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "tags": {
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cluster_name": "ceph",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.crush_device_class": "",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.encrypted": "0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.objectstore": "bluestore",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osd_id": "0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.type": "block",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.vdo": "0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.with_tpm": "0"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             },
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "type": "block",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "vg_name": "ceph_vg0"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:         }
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:     ],
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:     "1": [
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:         {
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "devices": [
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "/dev/loop4"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             ],
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_name": "ceph_lv1",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_size": "21470642176",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "name": "ceph_lv1",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "tags": {
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cluster_name": "ceph",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.crush_device_class": "",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.encrypted": "0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.objectstore": "bluestore",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osd_id": "1",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.type": "block",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.vdo": "0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.with_tpm": "0"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             },
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "type": "block",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "vg_name": "ceph_vg1"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:         }
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:     ],
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:     "2": [
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:         {
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "devices": [
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "/dev/loop5"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             ],
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_name": "ceph_lv2",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_size": "21470642176",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "name": "ceph_lv2",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "tags": {
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.cluster_name": "ceph",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.crush_device_class": "",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.encrypted": "0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.objectstore": "bluestore",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osd_id": "2",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.type": "block",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.vdo": "0",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:                 "ceph.with_tpm": "0"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             },
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "type": "block",
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:             "vg_name": "ceph_vg2"
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:         }
Jan 31 08:01:21 compute-0 wonderful_morse[100083]:     ]
Jan 31 08:01:21 compute-0 wonderful_morse[100083]: }
Jan 31 08:01:21 compute-0 systemd[1]: libpod-b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b.scope: Deactivated successfully.
Jan 31 08:01:21 compute-0 podman[100066]: 2026-01-31 08:01:21.207620142 +0000 UTC m=+0.455637785 container died b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 31 08:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed2ccf837575209dccb5e87ef31f588572f7b577444a9c93a004b9aae3280664-merged.mount: Deactivated successfully.
Jan 31 08:01:21 compute-0 podman[100066]: 2026-01-31 08:01:21.29058619 +0000 UTC m=+0.538603803 container remove b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_morse, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:21 compute-0 systemd[1]: libpod-conmon-b42d826b9a5d4f20dcbb69cce14e727b98af3e9475d5142352f93cac67bf8d0b.scope: Deactivated successfully.
Jan 31 08:01:21 compute-0 sudo[99989]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 08:01:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 08:01:21 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 08:01:21 compute-0 sudo[100106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:21 compute-0 sudo[100106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:21 compute-0 sudo[100106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:21 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 84 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:21 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 84 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:21 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 84 pg[9.b( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:21 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 84 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:21 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 84 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:21 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 84 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=80/75 les/c/f=81/76/0 sis=83) [0] r=0 lpr=83 pi=[75,83)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:21 compute-0 sudo[100131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:01:21 compute-0 sudo[100131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:21 compute-0 ceph-mon[75294]: 5.3 scrub starts
Jan 31 08:01:21 compute-0 ceph-mon[75294]: 5.3 scrub ok
Jan 31 08:01:21 compute-0 ceph-mon[75294]: pgmap v209: 305 pgs: 3 active+recovery_wait, 2 active+recovery_wait+degraded, 11 peering, 1 active+recovering, 288 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 2/251 objects degraded (0.797%); 1/251 objects misplaced (0.398%); 375 B/s, 1 objects/s recovering
Jan 31 08:01:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:21 compute-0 ceph-mon[75294]: osdmap e84: 3 total, 3 up, 3 in
Jan 31 08:01:21 compute-0 podman[100168]: 2026-01-31 08:01:21.67284985 +0000 UTC m=+0.040409939 container create 2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:01:21 compute-0 systemd[1]: Started libpod-conmon-2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2.scope.
Jan 31 08:01:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:21 compute-0 podman[100168]: 2026-01-31 08:01:21.741969154 +0000 UTC m=+0.109529263 container init 2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:21 compute-0 podman[100168]: 2026-01-31 08:01:21.650184696 +0000 UTC m=+0.017744785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:01:21 compute-0 podman[100168]: 2026-01-31 08:01:21.747250164 +0000 UTC m=+0.114810253 container start 2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:01:21 compute-0 vigorous_proskuriakova[100185]: 167 167
Jan 31 08:01:21 compute-0 systemd[1]: libpod-2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2.scope: Deactivated successfully.
Jan 31 08:01:21 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 31 08:01:21 compute-0 podman[100168]: 2026-01-31 08:01:21.757836254 +0000 UTC m=+0.125396353 container attach 2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:01:21 compute-0 podman[100168]: 2026-01-31 08:01:21.758164414 +0000 UTC m=+0.125724503 container died 2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:01:21 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 31 08:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-66dc9f3b6ee13e305dba80d38df7fc7b15b055efb8f6ceb7428319b1d4f6515e-merged.mount: Deactivated successfully.
Jan 31 08:01:21 compute-0 podman[100168]: 2026-01-31 08:01:21.823530871 +0000 UTC m=+0.191090960 container remove 2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:01:21 compute-0 systemd[1]: libpod-conmon-2ae56855bf6a46a8aa328bfcd75312651088f72fa771bf0302d1464b684041c2.scope: Deactivated successfully.
Jan 31 08:01:21 compute-0 podman[100209]: 2026-01-31 08:01:21.956294513 +0000 UTC m=+0.041853341 container create 8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_robinson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:01:21 compute-0 systemd[1]: Started libpod-conmon-8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03.scope.
Jan 31 08:01:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d8bc8d29dadf06d1cb3fb7db6c9812f78f9875815596c8e4ee598610606c1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d8bc8d29dadf06d1cb3fb7db6c9812f78f9875815596c8e4ee598610606c1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:22 compute-0 podman[100209]: 2026-01-31 08:01:21.932761804 +0000 UTC m=+0.018320672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d8bc8d29dadf06d1cb3fb7db6c9812f78f9875815596c8e4ee598610606c1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d8bc8d29dadf06d1cb3fb7db6c9812f78f9875815596c8e4ee598610606c1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:22 compute-0 podman[100209]: 2026-01-31 08:01:22.047296208 +0000 UTC m=+0.132855036 container init 8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:01:22 compute-0 podman[100209]: 2026-01-31 08:01:22.05475526 +0000 UTC m=+0.140314078 container start 8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:01:22 compute-0 podman[100209]: 2026-01-31 08:01:22.169933023 +0000 UTC m=+0.255491871 container attach 8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:01:22 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 31 08:01:22 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 31 08:01:22 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 31 08:01:22 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 31 08:01:22 compute-0 ceph-mon[75294]: 2.1 scrub starts
Jan 31 08:01:22 compute-0 ceph-mon[75294]: 2.1 scrub ok
Jan 31 08:01:22 compute-0 lvm[100311]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:01:22 compute-0 lvm[100311]: VG ceph_vg0 finished
Jan 31 08:01:22 compute-0 lvm[100312]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:01:22 compute-0 lvm[100312]: VG ceph_vg1 finished
Jan 31 08:01:22 compute-0 lvm[100314]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:01:22 compute-0 lvm[100314]: VG ceph_vg2 finished
Jan 31 08:01:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 3 active+recovery_wait, 2 active+recovery_wait+degraded, 1 active+recovering, 299 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 2/251 objects degraded (0.797%); 1/251 objects misplaced (0.398%); 0 B/s, 0 objects/s recovering
Jan 31 08:01:22 compute-0 jovial_robinson[100226]: {}
Jan 31 08:01:22 compute-0 systemd[1]: libpod-8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03.scope: Deactivated successfully.
Jan 31 08:01:22 compute-0 systemd[1]: libpod-8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03.scope: Consumed 1.084s CPU time.
Jan 31 08:01:22 compute-0 podman[100209]: 2026-01-31 08:01:22.834390079 +0000 UTC m=+0.919948897 container died 8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_robinson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:01:23 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 31 08:01:23 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 31 08:01:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-14d8bc8d29dadf06d1cb3fb7db6c9812f78f9875815596c8e4ee598610606c1a-merged.mount: Deactivated successfully.
Jan 31 08:01:23 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 08:01:23 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 08:01:24 compute-0 ceph-mon[75294]: 5.5 scrub starts
Jan 31 08:01:24 compute-0 ceph-mon[75294]: 5.5 scrub ok
Jan 31 08:01:24 compute-0 ceph-mon[75294]: 5.12 scrub starts
Jan 31 08:01:24 compute-0 ceph-mon[75294]: 5.12 scrub ok
Jan 31 08:01:24 compute-0 ceph-mon[75294]: pgmap v211: 305 pgs: 3 active+recovery_wait, 2 active+recovery_wait+degraded, 1 active+recovering, 299 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 2/251 objects degraded (0.797%); 1/251 objects misplaced (0.398%); 0 B/s, 0 objects/s recovering
Jan 31 08:01:24 compute-0 podman[100209]: 2026-01-31 08:01:24.433570972 +0000 UTC m=+2.519129800 container remove 8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:24 compute-0 sudo[100131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:01:24 compute-0 systemd[1]: libpod-conmon-8d61c6b12dbd977b06e52d7389f4452b2132fe63974c3a332984708dd3278a03.scope: Deactivated successfully.
Jan 31 08:01:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:01:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 723 B/s, 17 objects/s recovering
Jan 31 08:01:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 08:01:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:01:24 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 31 08:01:24 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 31 08:01:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:25 compute-0 sudo[100332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:01:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 08:01:25 compute-0 sudo[100332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:25 compute-0 sudo[100332]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 08:01:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:25 compute-0 ceph-mon[75294]: 2.15 scrub starts
Jan 31 08:01:25 compute-0 ceph-mon[75294]: 2.15 scrub ok
Jan 31 08:01:25 compute-0 ceph-mon[75294]: 5.6 scrub starts
Jan 31 08:01:25 compute-0 ceph-mon[75294]: 5.6 scrub ok
Jan 31 08:01:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:25 compute-0 ceph-mon[75294]: pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 723 B/s, 17 objects/s recovering
Jan 31 08:01:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:01:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 08:01:25 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/251 objects degraded (0.797%), 2 pgs degraded)
Jan 31 08:01:25 compute-0 ceph-mon[75294]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 08:01:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:01:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 08:01:25 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 08:01:26 compute-0 ceph-mon[75294]: 5.e scrub starts
Jan 31 08:01:26 compute-0 ceph-mon[75294]: 5.e scrub ok
Jan 31 08:01:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:01:26 compute-0 ceph-mon[75294]: 5.16 scrub starts
Jan 31 08:01:26 compute-0 ceph-mon[75294]: 5.16 scrub ok
Jan 31 08:01:26 compute-0 ceph-mon[75294]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/251 objects degraded (0.797%), 2 pgs degraded)
Jan 31 08:01:26 compute-0 ceph-mon[75294]: Cluster is now healthy
Jan 31 08:01:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:01:26 compute-0 ceph-mon[75294]: osdmap e85: 3 total, 3 up, 3 in
Jan 31 08:01:26 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 31 08:01:26 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 31 08:01:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 674 B/s, 16 objects/s recovering
Jan 31 08:01:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 08:01:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:01:27 compute-0 sudo[99812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 08:01:27 compute-0 ceph-mon[75294]: 5.d scrub starts
Jan 31 08:01:27 compute-0 ceph-mon[75294]: 5.d scrub ok
Jan 31 08:01:27 compute-0 ceph-mon[75294]: pgmap v214: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 674 B/s, 16 objects/s recovering
Jan 31 08:01:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:01:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:01:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 08:01:27 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 08:01:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 08:01:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 08:01:27 compute-0 sshd-session[99351]: Connection closed by 192.168.122.30 port 60856
Jan 31 08:01:27 compute-0 sshd-session[99348]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:01:27 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 08:01:27 compute-0 systemd[1]: session-34.scope: Consumed 7.695s CPU time.
Jan 31 08:01:27 compute-0 systemd-logind[810]: Session 34 logged out. Waiting for processes to exit.
Jan 31 08:01:27 compute-0 systemd-logind[810]: Removed session 34.
Jan 31 08:01:28 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 31 08:01:28 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 31 08:01:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:01:28 compute-0 ceph-mon[75294]: osdmap e86: 3 total, 3 up, 3 in
Jan 31 08:01:28 compute-0 ceph-mon[75294]: 5.1b scrub starts
Jan 31 08:01:28 compute-0 ceph-mon[75294]: 5.1b scrub ok
Jan 31 08:01:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 584 B/s, 14 objects/s recovering
Jan 31 08:01:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 08:01:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:01:29 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 08:01:29 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 08:01:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 08:01:29 compute-0 ceph-mon[75294]: 2.2 scrub starts
Jan 31 08:01:29 compute-0 ceph-mon[75294]: 2.2 scrub ok
Jan 31 08:01:29 compute-0 ceph-mon[75294]: pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 584 B/s, 14 objects/s recovering
Jan 31 08:01:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:01:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:01:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 08:01:29 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 08:01:29 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 31 08:01:29 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 31 08:01:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:30 compute-0 ceph-mon[75294]: 5.9 scrub starts
Jan 31 08:01:30 compute-0 ceph-mon[75294]: 5.9 scrub ok
Jan 31 08:01:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:01:30 compute-0 ceph-mon[75294]: osdmap e87: 3 total, 3 up, 3 in
Jan 31 08:01:30 compute-0 ceph-mon[75294]: 4.18 scrub starts
Jan 31 08:01:30 compute-0 ceph-mon[75294]: 4.18 scrub ok
Jan 31 08:01:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 08:01:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:01:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 08:01:31 compute-0 ceph-mon[75294]: pgmap v218: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:01:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:01:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 08:01:32 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.678183556s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 active pruub 200.289199829s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.678131104s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 200.289199829s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.677869797s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 active pruub 200.289794922s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.677823067s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 200.289794922s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.677802086s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 active pruub 200.289794922s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.678337097s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 active pruub 200.290390015s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.677743912s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 200.289794922s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:32 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 88 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88 pruub=13.678305626s) [2] r=-1 lpr=88 pi=[75,88)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 200.290390015s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:32 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 31 08:01:32 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 31 08:01:32 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 88 pg[9.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88) [2] r=0 lpr=88 pi=[75,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:32 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 88 pg[9.6( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88) [2] r=0 lpr=88 pi=[75,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:32 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 88 pg[9.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88) [2] r=0 lpr=88 pi=[75,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:32 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 88 pg[9.e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=88) [2] r=0 lpr=88 pi=[75,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 08:01:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:01:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 08:01:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:01:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 08:01:33 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.6( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.6( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[75,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:01:33 compute-0 ceph-mon[75294]: osdmap e88: 3 total, 3 up, 3 in
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-mon[75294]: 2.17 scrub starts
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-mon[75294]: 2.17 scrub ok
Jan 31 08:01:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 89 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89 pruub=12.096394539s) [2] r=-1 lpr=89 pi=[83,89)/1 crt=71'551 active pruub 203.388931274s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89 pruub=12.096348763s) [2] r=-1 lpr=89 pi=[83,89)/1 crt=71'551 unknown NOTIFY pruub 203.388931274s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89) [2] r=0 lpr=89 pi=[83,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89 pruub=12.099746704s) [2] r=-1 lpr=89 pi=[83,89)/1 crt=71'551 active pruub 203.393157959s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89 pruub=12.099705696s) [2] r=-1 lpr=89 pi=[83,89)/1 crt=71'551 unknown NOTIFY pruub 203.393157959s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=89 pruub=11.083349228s) [2] r=-1 lpr=89 pi=[82,89)/1 crt=71'551 active pruub 202.376815796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=89 pruub=11.083322525s) [2] r=-1 lpr=89 pi=[82,89)/1 crt=71'551 unknown NOTIFY pruub 202.376815796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89 pruub=12.099642754s) [2] r=-1 lpr=89 pi=[83,89)/1 crt=71'551 active pruub 203.393188477s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 89 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89 pruub=12.099621773s) [2] r=-1 lpr=89 pi=[83,89)/1 crt=71'551 unknown NOTIFY pruub 203.393188477s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89) [2] r=0 lpr=89 pi=[83,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=89) [2] r=0 lpr=89 pi=[82,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 89 pg[9.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=89) [2] r=0 lpr=89 pi=[83,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:33 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 08:01:33 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 08:01:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 08:01:34 compute-0 ceph-mon[75294]: pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:01:34 compute-0 ceph-mon[75294]: osdmap e89: 3 total, 3 up, 3 in
Jan 31 08:01:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 08:01:34 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 08:01:34 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 31 08:01:34 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[83,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[83,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[83,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[83,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[82,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[82,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[83,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 90 pg[9.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[83,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=90) [2]/[0] r=0 lpr=90 pi=[82,90)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=90) [2]/[0] r=0 lpr=90 pi=[82,90)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:34 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 90 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=89/90 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 90 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=83/84 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:34 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 90 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=89/90 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:34 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 90 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=89/90 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:34 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 90 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=89/90 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[75,89)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 08:01:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:01:35 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 31 08:01:35 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 31 08:01:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 08:01:35 compute-0 ceph-mon[75294]: 3.18 scrub starts
Jan 31 08:01:35 compute-0 ceph-mon[75294]: 3.18 scrub ok
Jan 31 08:01:35 compute-0 ceph-mon[75294]: osdmap e90: 3 total, 3 up, 3 in
Jan 31 08:01:35 compute-0 ceph-mon[75294]: 5.4 scrub starts
Jan 31 08:01:35 compute-0 ceph-mon[75294]: 5.4 scrub ok
Jan 31 08:01:35 compute-0 ceph-mon[75294]: pgmap v223: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:01:35 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:01:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 08:01:35 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=89/90 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.889571190s) [2] async=[2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 active pruub 204.408737183s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=89/90 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.889181137s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 204.408737183s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=89/90 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.937019348s) [2] async=[2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 active pruub 204.457061768s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=89/90 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.936978340s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 204.457061768s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=91 pruub=10.769458771s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 active pruub 200.289764404s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=91 pruub=10.769433022s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 200.289764404s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=91 pruub=10.769622803s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 active pruub 200.290390015s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=91 pruub=10.769578934s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 200.290390015s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=89/90 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.898647308s) [2] async=[2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 active pruub 204.419586182s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=89/90 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.898605347s) [2] async=[2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 active pruub 204.419586182s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=89/90 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.898596764s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 204.419586182s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 91 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=89/90 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91 pruub=14.898551941s) [2] r=-1 lpr=91 pi=[75,91)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 204.419586182s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.18( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 91 pg[9.8( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 91 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=90/91 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:35 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 91 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=90/91 n=7 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:35 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 91 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=90/91 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[83,90)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:35 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 91 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=90/91 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[82,90)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 08:01:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 08:01:35 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 08:01:35 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 31 08:01:35 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.8( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[75,92)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.18( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[75,92)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.8( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[75,92)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.18( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[75,92)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 92 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=0 lpr=92 pi=[75,92)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 92 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=0 lpr=92 pi=[75,92)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 92 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=0 lpr=92 pi=[75,92)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:35 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 92 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] r=0 lpr=92 pi=[75,92)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.6( v 71'551 (0'0,71'551] local-lis/les=91/92 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.e( v 71'551 (0'0,71'551] local-lis/les=91/92 n=7 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 92 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=89/75 les/c/f=90/76/0 sis=91) [2] r=0 lpr=91 pi=[75,91)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:36 compute-0 ceph-mon[75294]: 2.11 scrub starts
Jan 31 08:01:36 compute-0 ceph-mon[75294]: 2.11 scrub ok
Jan 31 08:01:36 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:01:36 compute-0 ceph-mon[75294]: osdmap e91: 3 total, 3 up, 3 in
Jan 31 08:01:36 compute-0 ceph-mon[75294]: osdmap e92: 3 total, 3 up, 3 in
Jan 31 08:01:36 compute-0 ceph-mon[75294]: 4.9 scrub starts
Jan 31 08:01:36 compute-0 ceph-mon[75294]: 4.9 scrub ok
Jan 31 08:01:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 08:01:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 08:01:36 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=90/82 les/c/f=91/83/0 sis=93) [2] r=0 lpr=93 pi=[82,93)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 93 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=90/82 les/c/f=91/83/0 sis=93) [2] r=0 lpr=93 pi=[82,93)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=90/91 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93 pruub=14.919635773s) [2] async=[2] r=-1 lpr=93 pi=[83,93)/1 crt=71'551 active pruub 209.384155273s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=90/91 n=6 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93 pruub=14.919472694s) [2] async=[2] r=-1 lpr=93 pi=[83,93)/1 crt=71'551 active pruub 209.384140015s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=90/91 n=6 ec=75/43 lis/c=90/82 les/c/f=91/83/0 sis=93 pruub=14.927224159s) [2] async=[2] r=-1 lpr=93 pi=[82,93)/1 crt=71'551 active pruub 209.391983032s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=90/91 n=6 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93 pruub=14.919405937s) [2] r=-1 lpr=93 pi=[83,93)/1 crt=71'551 unknown NOTIFY pruub 209.384140015s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=90/91 n=6 ec=75/43 lis/c=90/82 les/c/f=91/83/0 sis=93 pruub=14.927163124s) [2] r=-1 lpr=93 pi=[82,93)/1 crt=71'551 unknown NOTIFY pruub 209.391983032s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=90/91 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93 pruub=14.919059753s) [2] r=-1 lpr=93 pi=[83,93)/1 crt=71'551 unknown NOTIFY pruub 209.384155273s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=90/91 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93 pruub=14.918880463s) [2] async=[2] r=-1 lpr=93 pi=[83,93)/1 crt=71'551 active pruub 209.384094238s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:36 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 93 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=90/91 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93 pruub=14.918797493s) [2] r=-1 lpr=93 pi=[83,93)/1 crt=71'551 unknown NOTIFY pruub 209.384094238s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:36 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 93 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=92/93 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] async=[2] r=0 lpr=92 pi=[75,92)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:36 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 93 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=92/93 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=92) [2]/[1] async=[2] r=0 lpr=92 pi=[75,92)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 08:01:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:01:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 08:01:37 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:01:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 08:01:37 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 08:01:37 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 94 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=92/93 n=7 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94 pruub=15.000333786s) [2] async=[2] r=-1 lpr=94 pi=[75,94)/1 crt=71'551 lcod 0'0 active pruub 206.656875610s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:37 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 94 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=92/93 n=7 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94 pruub=15.000255585s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 206.656875610s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:37 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 94 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=92/93 n=6 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94 pruub=15.002530098s) [2] async=[2] r=-1 lpr=94 pi=[75,94)/1 crt=71'551 lcod 0'0 active pruub 206.659637451s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:37 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 94 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=92/93 n=6 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94 pruub=15.002467155s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 206.659637451s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:37 compute-0 ceph-mon[75294]: osdmap e93: 3 total, 3 up, 3 in
Jan 31 08:01:37 compute-0 ceph-mon[75294]: pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=93/94 n=6 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.7( v 71'551 (0'0,71'551] local-lis/les=93/94 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.f( v 71'551 (0'0,71'551] local-lis/les=93/94 n=7 ec=75/43 lis/c=90/83 les/c/f=91/84/0 sis=93) [2] r=0 lpr=93 pi=[83,93)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 94 pg[9.17( v 71'551 (0'0,71'551] local-lis/les=93/94 n=6 ec=75/43 lis/c=90/82 les/c/f=91/83/0 sis=93) [2] r=0 lpr=93 pi=[82,93)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 31 08:01:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 31 08:01:38 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 31 08:01:38 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 31 08:01:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 08:01:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 08:01:38 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 08:01:38 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 95 pg[9.8( v 71'551 (0'0,71'551] local-lis/les=94/95 n=7 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:38 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 95 pg[9.18( v 71'551 (0'0,71'551] local-lis/les=94/95 n=6 ec=75/43 lis/c=92/75 les/c/f=93/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:38 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 08:01:38 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:01:38 compute-0 ceph-mon[75294]: osdmap e94: 3 total, 3 up, 3 in
Jan 31 08:01:38 compute-0 ceph-mon[75294]: 2.d scrub starts
Jan 31 08:01:38 compute-0 ceph-mon[75294]: 2.d scrub ok
Jan 31 08:01:38 compute-0 ceph-mon[75294]: osdmap e95: 3 total, 3 up, 3 in
Jan 31 08:01:38 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 08:01:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 726 B/s, 17 objects/s recovering
Jan 31 08:01:39 compute-0 ceph-mon[75294]: 3.3 scrub starts
Jan 31 08:01:39 compute-0 ceph-mon[75294]: 3.3 scrub ok
Jan 31 08:01:39 compute-0 ceph-mon[75294]: 5.c scrub starts
Jan 31 08:01:39 compute-0 ceph-mon[75294]: 5.c scrub ok
Jan 31 08:01:39 compute-0 ceph-mon[75294]: pgmap v230: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 726 B/s, 17 objects/s recovering
Jan 31 08:01:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 471 B/s, 11 objects/s recovering
Jan 31 08:01:41 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 31 08:01:41 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 31 08:01:41 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 31 08:01:41 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 31 08:01:41 compute-0 ceph-mon[75294]: pgmap v231: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 471 B/s, 11 objects/s recovering
Jan 31 08:01:42 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 08:01:42 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 08:01:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 397 B/s, 9 objects/s recovering
Jan 31 08:01:42 compute-0 ceph-mon[75294]: 2.f scrub starts
Jan 31 08:01:42 compute-0 ceph-mon[75294]: 2.f scrub ok
Jan 31 08:01:42 compute-0 ceph-mon[75294]: 7.1c scrub starts
Jan 31 08:01:42 compute-0 ceph-mon[75294]: 7.1c scrub ok
Jan 31 08:01:43 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 31 08:01:43 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 31 08:01:44 compute-0 ceph-mon[75294]: 3.16 scrub starts
Jan 31 08:01:44 compute-0 ceph-mon[75294]: 3.16 scrub ok
Jan 31 08:01:44 compute-0 ceph-mon[75294]: pgmap v232: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 397 B/s, 9 objects/s recovering
Jan 31 08:01:44 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 08:01:44 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 08:01:44 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 08:01:44 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 08:01:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 315 B/s, 7 objects/s recovering
Jan 31 08:01:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 08:01:44 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:01:44 compute-0 sshd-session[100391]: Accepted publickey for zuul from 192.168.122.30 port 53494 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:01:44 compute-0 systemd-logind[810]: New session 35 of user zuul.
Jan 31 08:01:44 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 31 08:01:44 compute-0 sshd-session[100391]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:01:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 08:01:45 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 31 08:01:45 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 31 08:01:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:01:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 08:01:45 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 08:01:45 compute-0 ceph-mon[75294]: 4.11 scrub starts
Jan 31 08:01:45 compute-0 ceph-mon[75294]: 4.11 scrub ok
Jan 31 08:01:45 compute-0 ceph-mon[75294]: 4.d scrub starts
Jan 31 08:01:45 compute-0 ceph-mon[75294]: 4.d scrub ok
Jan 31 08:01:45 compute-0 ceph-mon[75294]: pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 315 B/s, 7 objects/s recovering
Jan 31 08:01:45 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:01:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:45 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 08:01:45 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 08:01:45 compute-0 python3.9[100544]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 08:01:46 compute-0 ceph-mon[75294]: 4.13 scrub starts
Jan 31 08:01:46 compute-0 ceph-mon[75294]: 4.13 scrub ok
Jan 31 08:01:46 compute-0 ceph-mon[75294]: 3.17 scrub starts
Jan 31 08:01:46 compute-0 ceph-mon[75294]: 3.17 scrub ok
Jan 31 08:01:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:01:46 compute-0 ceph-mon[75294]: osdmap e96: 3 total, 3 up, 3 in
Jan 31 08:01:46 compute-0 ceph-mon[75294]: 5.1d scrub starts
Jan 31 08:01:46 compute-0 ceph-mon[75294]: 5.1d scrub ok
Jan 31 08:01:46 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 31 08:01:46 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 31 08:01:46 compute-0 python3.9[100718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:01:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 08:01:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:01:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 08:01:47 compute-0 ceph-mon[75294]: 2.3 scrub starts
Jan 31 08:01:47 compute-0 ceph-mon[75294]: 2.3 scrub ok
Jan 31 08:01:47 compute-0 ceph-mon[75294]: pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:01:47 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:01:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 08:01:47 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 08:01:47 compute-0 sudo[100872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpuahyhqhulhjdlpvbdyjazvelofqrdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846507.0414581-40-159369579861786/AnsiballZ_command.py'
Jan 31 08:01:47 compute-0 sudo[100872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:47 compute-0 python3.9[100874]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:47 compute-0 sudo[100872]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:48 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 08:01:48 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 08:01:48 compute-0 sudo[101025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llykxcoimjuvwodxoukackpstqucxonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846507.826321-52-175142620041375/AnsiballZ_stat.py'
Jan 31 08:01:48 compute-0 sudo[101025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:48 compute-0 python3.9[101027]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:01:48 compute-0 sudo[101025]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:48 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 08:01:48 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 08:01:48 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:01:48 compute-0 ceph-mon[75294]: osdmap e97: 3 total, 3 up, 3 in
Jan 31 08:01:48 compute-0 ceph-mon[75294]: 7.9 scrub starts
Jan 31 08:01:48 compute-0 ceph-mon[75294]: 7.9 scrub ok
Jan 31 08:01:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 08:01:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:01:49 compute-0 sudo[101179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfyzhrhlqrxkiayspypdkordabuygvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846508.7903364-63-199500884220269/AnsiballZ_file.py'
Jan 31 08:01:49 compute-0 sudo[101179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:49 compute-0 python3.9[101181]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:01:49 compute-0 sudo[101179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:49 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 31 08:01:49 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 31 08:01:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 08:01:49 compute-0 sudo[101331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skshnlmcgfsxjjwmadtbofhgbldfffvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846509.5526443-72-83363451772774/AnsiballZ_file.py'
Jan 31 08:01:49 compute-0 sudo[101331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:49 compute-0 ceph-mon[75294]: 4.7 scrub starts
Jan 31 08:01:49 compute-0 ceph-mon[75294]: 4.7 scrub ok
Jan 31 08:01:49 compute-0 ceph-mon[75294]: pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:49 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:01:50 compute-0 python3.9[101333]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:01:50 compute-0 sudo[101331]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:01:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 08:01:50 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 08:01:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:01:50 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 31 08:01:50 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 31 08:01:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:01:50
Jan 31 08:01:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:01:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:01:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 31 08:01:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:01:50 compute-0 python3.9[101483]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:01:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 08:01:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:01:50 compute-0 network[101500]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:01:50 compute-0 network[101501]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:01:50 compute-0 network[101502]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:01:50 compute-0 ceph-mon[75294]: 2.5 scrub starts
Jan 31 08:01:50 compute-0 ceph-mon[75294]: 2.5 scrub ok
Jan 31 08:01:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:01:50 compute-0 ceph-mon[75294]: osdmap e98: 3 total, 3 up, 3 in
Jan 31 08:01:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:01:51 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 31 08:01:51 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 31 08:01:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 08:01:51 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:01:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 08:01:51 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 08:01:51 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 98 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=10.631234169s) [2] r=-1 lpr=98 pi=[75,98)/1 crt=71'551 lcod 0'0 active pruub 216.289703369s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:51 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 99 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=10.631171227s) [2] r=-1 lpr=98 pi=[75,98)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 216.289703369s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:51 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 98 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=10.631330490s) [2] r=-1 lpr=98 pi=[75,98)/1 crt=71'551 lcod 0'0 active pruub 216.290634155s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:51 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 99 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=98 pruub=10.631284714s) [2] r=-1 lpr=98 pi=[75,98)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 216.290634155s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:51 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 99 pg[9.c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=98) [2] r=0 lpr=99 pi=[75,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:51 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 99 pg[9.1c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=98) [2] r=0 lpr=99 pi=[75,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:52 compute-0 ceph-mon[75294]: 7.11 scrub starts
Jan 31 08:01:52 compute-0 ceph-mon[75294]: 7.11 scrub ok
Jan 31 08:01:52 compute-0 ceph-mon[75294]: pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:52 compute-0 ceph-mon[75294]: 7.6 scrub starts
Jan 31 08:01:52 compute-0 ceph-mon[75294]: 7.6 scrub ok
Jan 31 08:01:52 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:01:52 compute-0 ceph-mon[75294]: osdmap e99: 3 total, 3 up, 3 in
Jan 31 08:01:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 08:01:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 08:01:52 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 08:01:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 08:01:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:01:52 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 100 pg[9.1c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:52 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 100 pg[9.1c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:52 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 100 pg[9.c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:52 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 100 pg[9.c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:52 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 100 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=0 lpr=100 pi=[75,100)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:52 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 100 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=0 lpr=100 pi=[75,100)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:52 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 100 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=0 lpr=100 pi=[75,100)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:52 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 100 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=75/76 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] r=0 lpr=100 pi=[75,100)/1 crt=71'551 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:53 compute-0 python3.9[101762]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:01:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 08:01:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:01:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 08:01:53 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 08:01:54 compute-0 ceph-mon[75294]: osdmap e100: 3 total, 3 up, 3 in
Jan 31 08:01:54 compute-0 ceph-mon[75294]: pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:01:54 compute-0 python3.9[101912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:01:54 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 31 08:01:54 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:01:54 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:01:54 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 31 08:01:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 08:01:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:01:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 08:01:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:01:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:01:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:01:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:01:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:01:55 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 101 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=100/101 n=6 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] async=[2] r=0 lpr=100 pi=[75,100)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:55 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:01:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 08:01:55 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 101 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=100/101 n=7 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=100) [2]/[1] async=[2] r=0 lpr=100 pi=[75,100)/1 crt=71'551 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:55 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 08:01:55 compute-0 python3.9[102066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:01:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:01:55 compute-0 ceph-mon[75294]: osdmap e101: 3 total, 3 up, 3 in
Jan 31 08:01:55 compute-0 ceph-mon[75294]: 2.a scrub starts
Jan 31 08:01:55 compute-0 ceph-mon[75294]: 2.a scrub ok
Jan 31 08:01:55 compute-0 ceph-mon[75294]: pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:55 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:01:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 31 08:01:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 31 08:01:56 compute-0 sudo[102222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjvmvjihonrfclezaqadhxbugvuprlky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846516.0366654-120-129058971023819/AnsiballZ_setup.py'
Jan 31 08:01:56 compute-0 sudo[102222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:56 compute-0 python3.9[102224]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:01:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 08:01:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 31 08:01:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 08:01:56 compute-0 ceph-mon[75294]: 3.11 scrub starts
Jan 31 08:01:56 compute-0 ceph-mon[75294]: 3.11 scrub ok
Jan 31 08:01:56 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:01:56 compute-0 ceph-mon[75294]: osdmap e102: 3 total, 3 up, 3 in
Jan 31 08:01:56 compute-0 ceph-mon[75294]: 3.1 scrub starts
Jan 31 08:01:56 compute-0 ceph-mon[75294]: 3.1 scrub ok
Jan 31 08:01:56 compute-0 sudo[102222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 08:01:57 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 08:01:57 compute-0 sudo[102306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brqqvajhnznpxlktldpfxmflwnwsmejd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846516.0366654-120-129058971023819/AnsiballZ_dnf.py'
Jan 31 08:01:57 compute-0 sudo[102306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:57 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 103 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=100/101 n=7 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103 pruub=14.141616821s) [2] async=[2] r=-1 lpr=103 pi=[75,103)/1 crt=71'551 lcod 0'0 active pruub 225.745681763s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:57 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 103 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=100/101 n=6 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103 pruub=14.093406677s) [2] async=[2] r=-1 lpr=103 pi=[75,103)/1 crt=71'551 lcod 0'0 active pruub 225.698028564s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:57 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 103 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=100/101 n=6 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103 pruub=14.093349457s) [2] r=-1 lpr=103 pi=[75,103)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 225.698028564s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:57 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 103 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=100/101 n=7 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103 pruub=14.140866280s) [2] r=-1 lpr=103 pi=[75,103)/1 crt=71'551 lcod 0'0 unknown NOTIFY pruub 225.745681763s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:01:57 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 103 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103) [2] r=0 lpr=103 pi=[75,103)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:57 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 103 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=0/0 n=7 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103) [2] r=0 lpr=103 pi=[75,103)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:57 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 103 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103) [2] r=0 lpr=103 pi=[75,103)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:01:57 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 103 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103) [2] r=0 lpr=103 pi=[75,103)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:01:57 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 31 08:01:57 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 31 08:01:57 compute-0 python3.9[102308]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:01:58 compute-0 ceph-mon[75294]: pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:01:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 08:01:58 compute-0 ceph-mon[75294]: osdmap e103: 3 total, 3 up, 3 in
Jan 31 08:01:58 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 08:01:58 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 08:01:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 08:01:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 08:01:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 08:01:58 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 08:01:58 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 104 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=103/104 n=6 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103) [2] r=0 lpr=103 pi=[75,103)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:58 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 104 pg[9.c( v 71'551 (0'0,71'551] local-lis/les=103/104 n=7 ec=75/43 lis/c=100/75 les/c/f=101/76/0 sis=103) [2] r=0 lpr=103 pi=[75,103)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:01:58 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 31 08:01:58 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 31 08:01:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 86 B/s, 2 objects/s recovering
Jan 31 08:01:59 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 31 08:01:59 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 31 08:01:59 compute-0 ceph-mon[75294]: 4.4 scrub starts
Jan 31 08:01:59 compute-0 ceph-mon[75294]: 4.4 scrub ok
Jan 31 08:01:59 compute-0 ceph-mon[75294]: 5.2 scrub starts
Jan 31 08:01:59 compute-0 ceph-mon[75294]: 5.2 scrub ok
Jan 31 08:01:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 08:01:59 compute-0 ceph-mon[75294]: osdmap e104: 3 total, 3 up, 3 in
Jan 31 08:01:59 compute-0 ceph-mon[75294]: pgmap v249: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 86 B/s, 2 objects/s recovering
Jan 31 08:01:59 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 31 08:01:59 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 31 08:02:00 compute-0 ceph-mon[75294]: 7.15 scrub starts
Jan 31 08:02:00 compute-0 ceph-mon[75294]: 7.15 scrub ok
Jan 31 08:02:00 compute-0 ceph-mon[75294]: 3.1b scrub starts
Jan 31 08:02:00 compute-0 ceph-mon[75294]: 3.1b scrub ok
Jan 31 08:02:00 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 31 08:02:00 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 31 08:02:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 31 08:02:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 31 08:02:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 08:02:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:01 compute-0 ceph-mon[75294]: 7.a scrub starts
Jan 31 08:02:01 compute-0 ceph-mon[75294]: 7.a scrub ok
Jan 31 08:02:01 compute-0 ceph-mon[75294]: 2.4 scrub starts
Jan 31 08:02:01 compute-0 ceph-mon[75294]: 2.4 scrub ok
Jan 31 08:02:01 compute-0 ceph-mon[75294]: 7.5 scrub starts
Jan 31 08:02:01 compute-0 ceph-mon[75294]: 7.5 scrub ok
Jan 31 08:02:01 compute-0 ceph-mon[75294]: pgmap v250: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 08:02:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 08:02:02 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 08:02:02 compute-0 ceph-mon[75294]: 5.7 scrub starts
Jan 31 08:02:02 compute-0 ceph-mon[75294]: 5.7 scrub ok
Jan 31 08:02:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 1 objects/s recovering
Jan 31 08:02:02 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 08:02:02 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 08:02:03 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 31 08:02:03 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 31 08:02:03 compute-0 ceph-mon[75294]: pgmap v251: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 1 objects/s recovering
Jan 31 08:02:03 compute-0 ceph-mon[75294]: 3.c scrub starts
Jan 31 08:02:03 compute-0 ceph-mon[75294]: 3.c scrub ok
Jan 31 08:02:03 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 31 08:02:03 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 31 08:02:04 compute-0 ceph-mon[75294]: 4.12 scrub starts
Jan 31 08:02:04 compute-0 ceph-mon[75294]: 4.12 scrub ok
Jan 31 08:02:04 compute-0 ceph-mon[75294]: 7.4 scrub starts
Jan 31 08:02:04 compute-0 ceph-mon[75294]: 7.4 scrub ok
Jan 31 08:02:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Jan 31 08:02:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 31 08:02:04 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 08:02:05 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 31 08:02:05 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 31 08:02:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 08:02:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 08:02:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 08:02:05 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 08:02:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:05 compute-0 ceph-mon[75294]: pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Jan 31 08:02:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 08:02:05 compute-0 ceph-mon[75294]: 7.1f scrub starts
Jan 31 08:02:05 compute-0 ceph-mon[75294]: 7.1f scrub ok
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.027372988808735e-06 of space, bias 4.0, pg target 0.002432847586570482 quantized to 16 (current 16)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:02:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 31 08:02:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 08:02:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 08:02:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 08:02:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 08:02:06 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 08:02:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 08:02:06 compute-0 ceph-mon[75294]: osdmap e105: 3 total, 3 up, 3 in
Jan 31 08:02:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 08:02:07 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 08:02:07 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 08:02:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 31 08:02:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 31 08:02:07 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 31 08:02:07 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 31 08:02:07 compute-0 ceph-mon[75294]: pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 08:02:07 compute-0 ceph-mon[75294]: osdmap e106: 3 total, 3 up, 3 in
Jan 31 08:02:07 compute-0 ceph-mon[75294]: 5.1e scrub starts
Jan 31 08:02:07 compute-0 ceph-mon[75294]: 5.1e scrub ok
Jan 31 08:02:08 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 08:02:08 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 08:02:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 31 08:02:08 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 08:02:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 08:02:09 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 08:02:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 08:02:09 compute-0 ceph-mon[75294]: 2.7 scrub starts
Jan 31 08:02:09 compute-0 ceph-mon[75294]: 2.7 scrub ok
Jan 31 08:02:09 compute-0 ceph-mon[75294]: 3.e scrub starts
Jan 31 08:02:09 compute-0 ceph-mon[75294]: 3.e scrub ok
Jan 31 08:02:09 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 08:02:09 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 107 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=107 pruub=8.387985229s) [2] r=-1 lpr=107 pi=[83,107)/1 crt=71'551 active pruub 235.393859863s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:09 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 107 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=107 pruub=8.387948036s) [2] r=-1 lpr=107 pi=[83,107)/1 crt=71'551 unknown NOTIFY pruub 235.393859863s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:09 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 107 pg[9.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=107) [2] r=0 lpr=107 pi=[83,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:09 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 08:02:09 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 08:02:09 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 08:02:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 08:02:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 08:02:10 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 08:02:10 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 108 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=108) [2]/[0] r=0 lpr=108 pi=[83,108)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:10 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 108 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=108) [2]/[0] r=0 lpr=108 pi=[83,108)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:10 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 108 pg[9.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[83,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:10 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 108 pg[9.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[83,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 08:02:10 compute-0 ceph-mon[75294]: 7.8 scrub starts
Jan 31 08:02:10 compute-0 ceph-mon[75294]: 7.8 scrub ok
Jan 31 08:02:10 compute-0 ceph-mon[75294]: pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:10 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 08:02:10 compute-0 ceph-mon[75294]: osdmap e107: 3 total, 3 up, 3 in
Jan 31 08:02:10 compute-0 ceph-mon[75294]: 2.9 scrub starts
Jan 31 08:02:10 compute-0 ceph-mon[75294]: 2.9 scrub ok
Jan 31 08:02:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 08:02:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 31 08:02:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 08:02:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:10 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 31 08:02:10 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 31 08:02:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 08:02:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 08:02:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 08:02:11 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 08:02:11 compute-0 ceph-mon[75294]: osdmap e108: 3 total, 3 up, 3 in
Jan 31 08:02:11 compute-0 ceph-mon[75294]: 2.19 scrub starts
Jan 31 08:02:11 compute-0 ceph-mon[75294]: 2.19 scrub ok
Jan 31 08:02:11 compute-0 ceph-mon[75294]: pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 08:02:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 08:02:11 compute-0 ceph-mon[75294]: osdmap e109: 3 total, 3 up, 3 in
Jan 31 08:02:11 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 109 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=108/109 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[83,108)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 08:02:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 08:02:12 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 08:02:12 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 110 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=108/109 n=6 ec=75/43 lis/c=108/83 les/c/f=109/84/0 sis=110 pruub=14.948723793s) [2] async=[2] r=-1 lpr=110 pi=[83,110)/1 crt=71'551 active pruub 245.052490234s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:12 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 110 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=108/109 n=6 ec=75/43 lis/c=108/83 les/c/f=109/84/0 sis=110 pruub=14.948637962s) [2] r=-1 lpr=110 pi=[83,110)/1 crt=71'551 unknown NOTIFY pruub 245.052490234s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:12 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 110 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=108/83 les/c/f=109/84/0 sis=110) [2] r=0 lpr=110 pi=[83,110)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:12 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 110 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=108/83 les/c/f=109/84/0 sis=110) [2] r=0 lpr=110 pi=[83,110)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:12 compute-0 ceph-mon[75294]: 4.a scrub starts
Jan 31 08:02:12 compute-0 ceph-mon[75294]: 4.a scrub ok
Jan 31 08:02:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 31 08:02:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 08:02:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 08:02:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 08:02:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 08:02:13 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 08:02:13 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 111 pg[9.13( v 71'551 (0'0,71'551] local-lis/les=110/111 n=6 ec=75/43 lis/c=108/83 les/c/f=109/84/0 sis=110) [2] r=0 lpr=110 pi=[83,110)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:13 compute-0 ceph-mon[75294]: osdmap e110: 3 total, 3 up, 3 in
Jan 31 08:02:13 compute-0 ceph-mon[75294]: pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 08:02:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 08:02:13 compute-0 ceph-mon[75294]: osdmap e111: 3 total, 3 up, 3 in
Jan 31 08:02:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 111 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=111 pruub=10.305939674s) [1] r=-1 lpr=111 pi=[82,111)/1 crt=71'551 active pruub 242.373519897s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 111 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=111 pruub=10.305899620s) [1] r=-1 lpr=111 pi=[82,111)/1 crt=71'551 unknown NOTIFY pruub 242.373519897s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 111 pg[9.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=111) [1] r=0 lpr=111 pi=[82,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 08:02:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 08:02:14 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 08:02:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=112) [1]/[0] r=-1 lpr=112 pi=[82,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:14 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=112) [1]/[0] r=-1 lpr=112 pi=[82,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 112 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=112) [1]/[0] r=0 lpr=112 pi=[82,112)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:14 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 112 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=82/83 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=112) [1]/[0] r=0 lpr=112 pi=[82,112)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:14 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 08:02:14 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 08:02:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 31 08:02:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 31 08:02:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 08:02:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 08:02:15 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 08:02:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 08:02:15 compute-0 ceph-mon[75294]: osdmap e112: 3 total, 3 up, 3 in
Jan 31 08:02:15 compute-0 ceph-mon[75294]: 4.5 scrub starts
Jan 31 08:02:15 compute-0 ceph-mon[75294]: 4.5 scrub ok
Jan 31 08:02:15 compute-0 ceph-mon[75294]: pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 31 08:02:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 08:02:15 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 08:02:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 113 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=112/113 n=6 ec=75/43 lis/c=82/82 les/c/f=83/83/0 sis=112) [1]/[0] async=[1] r=0 lpr=112 pi=[82,112)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:15 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 08:02:15 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 08:02:15 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 113 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=113 pruub=15.794287682s) [0] r=-1 lpr=113 pi=[91,113)/1 crt=71'551 active pruub 241.304260254s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:15 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 113 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=113 pruub=15.794235229s) [0] r=-1 lpr=113 pi=[91,113)/1 crt=71'551 unknown NOTIFY pruub 241.304260254s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 113 pg[9.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=113) [0] r=0 lpr=113 pi=[91,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 08:02:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 08:02:15 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 08:02:15 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 114 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=114) [0]/[2] r=0 lpr=114 pi=[91,114)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:15 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 114 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=114) [0]/[2] r=0 lpr=114 pi=[91,114)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[91,114)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[91,114)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 114 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=112/113 n=6 ec=75/43 lis/c=112/82 les/c/f=113/83/0 sis=114 pruub=15.340257645s) [1] async=[1] r=-1 lpr=114 pi=[82,114)/1 crt=71'551 active pruub 249.192123413s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:15 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 114 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=112/113 n=6 ec=75/43 lis/c=112/82 les/c/f=113/83/0 sis=114 pruub=15.340167999s) [1] r=-1 lpr=114 pi=[82,114)/1 crt=71'551 unknown NOTIFY pruub 249.192123413s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:15 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 114 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=112/82 les/c/f=113/83/0 sis=114) [1] r=0 lpr=114 pi=[82,114)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:15 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 114 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=112/82 les/c/f=113/83/0 sis=114) [1] r=0 lpr=114 pi=[82,114)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:16 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 08:02:16 compute-0 ceph-mon[75294]: osdmap e113: 3 total, 3 up, 3 in
Jan 31 08:02:16 compute-0 ceph-mon[75294]: 5.1 scrub starts
Jan 31 08:02:16 compute-0 ceph-mon[75294]: 5.1 scrub ok
Jan 31 08:02:16 compute-0 ceph-mon[75294]: osdmap e114: 3 total, 3 up, 3 in
Jan 31 08:02:16 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 08:02:16 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 08:02:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 31 08:02:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 31 08:02:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 08:02:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 08:02:16 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 08:02:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 08:02:16 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 08:02:16 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 115 pg[9.15( v 71'551 (0'0,71'551] local-lis/les=114/115 n=6 ec=75/43 lis/c=112/82 les/c/f=113/83/0 sis=114) [1] r=0 lpr=114 pi=[82,114)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:16 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 115 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=114/115 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[91,114)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:17 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 08:02:17 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 08:02:17 compute-0 ceph-mon[75294]: 4.f scrub starts
Jan 31 08:02:17 compute-0 ceph-mon[75294]: 4.f scrub ok
Jan 31 08:02:17 compute-0 ceph-mon[75294]: pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 31 08:02:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 08:02:17 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 08:02:17 compute-0 ceph-mon[75294]: osdmap e115: 3 total, 3 up, 3 in
Jan 31 08:02:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 08:02:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 08:02:17 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 08:02:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 116 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=114/115 n=6 ec=75/43 lis/c=114/91 les/c/f=115/92/0 sis=116 pruub=15.001611710s) [0] async=[0] r=-1 lpr=116 pi=[91,116)/1 crt=71'551 active pruub 242.610290527s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:17 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 116 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=114/115 n=6 ec=75/43 lis/c=114/91 les/c/f=115/92/0 sis=116 pruub=15.001532555s) [0] r=-1 lpr=116 pi=[91,116)/1 crt=71'551 unknown NOTIFY pruub 242.610290527s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 116 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=114/91 les/c/f=115/92/0 sis=116) [0] r=0 lpr=116 pi=[91,116)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:17 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 116 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=114/91 les/c/f=115/92/0 sis=116) [0] r=0 lpr=116 pi=[91,116)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:18 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 31 08:02:18 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 31 08:02:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Jan 31 08:02:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 08:02:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 08:02:18 compute-0 ceph-mon[75294]: 3.f scrub starts
Jan 31 08:02:18 compute-0 ceph-mon[75294]: 3.f scrub ok
Jan 31 08:02:18 compute-0 ceph-mon[75294]: osdmap e116: 3 total, 3 up, 3 in
Jan 31 08:02:18 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 08:02:18 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 117 pg[9.16( v 71'551 (0'0,71'551] local-lis/les=116/117 n=6 ec=75/43 lis/c=114/91 les/c/f=115/92/0 sis=116) [0] r=0 lpr=116 pi=[91,116)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:19 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 31 08:02:19 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 31 08:02:20 compute-0 ceph-mon[75294]: 2.18 scrub starts
Jan 31 08:02:20 compute-0 ceph-mon[75294]: 2.18 scrub ok
Jan 31 08:02:20 compute-0 ceph-mon[75294]: pgmap v271: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Jan 31 08:02:20 compute-0 ceph-mon[75294]: osdmap e117: 3 total, 3 up, 3 in
Jan 31 08:02:20 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 31 08:02:20 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 31 08:02:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 1 objects/s recovering
Jan 31 08:02:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:21 compute-0 ceph-mon[75294]: 4.1 scrub starts
Jan 31 08:02:21 compute-0 ceph-mon[75294]: 4.1 scrub ok
Jan 31 08:02:21 compute-0 ceph-mon[75294]: 2.1b scrub starts
Jan 31 08:02:21 compute-0 ceph-mon[75294]: 2.1b scrub ok
Jan 31 08:02:21 compute-0 ceph-mon[75294]: pgmap v273: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 44 B/s, 1 objects/s recovering
Jan 31 08:02:21 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 31 08:02:21 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 31 08:02:22 compute-0 ceph-mon[75294]: 4.2 scrub starts
Jan 31 08:02:22 compute-0 ceph-mon[75294]: 4.2 scrub ok
Jan 31 08:02:22 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 31 08:02:22 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 31 08:02:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 08:02:23 compute-0 ceph-mon[75294]: pgmap v274: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 08:02:24 compute-0 ceph-mon[75294]: 7.18 scrub starts
Jan 31 08:02:24 compute-0 ceph-mon[75294]: 7.18 scrub ok
Jan 31 08:02:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Jan 31 08:02:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 31 08:02:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 08:02:25 compute-0 ceph-mon[75294]: pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Jan 31 08:02:25 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 08:02:25 compute-0 sshd-session[102445]: Invalid user ubuntu from 193.32.162.145 port 58136
Jan 31 08:02:25 compute-0 sshd-session[102445]: Connection closed by invalid user ubuntu 193.32.162.145 port 58136 [preauth]
Jan 31 08:02:25 compute-0 sudo[102447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:02:25 compute-0 sudo[102447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:25 compute-0 sudo[102447]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:25 compute-0 sudo[102474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:02:25 compute-0 sudo[102474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 08:02:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:25 compute-0 sudo[102474]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:02:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:02:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:02:25 compute-0 sudo[102536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:02:25 compute-0 sudo[102536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:25 compute-0 sudo[102536]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:26 compute-0 sudo[102561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:02:26 compute-0 sudo[102561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 08:02:26 compute-0 ceph-mon[75294]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 08:02:26 compute-0 ceph-mon[75294]: 5.f scrub starts
Jan 31 08:02:26 compute-0 ceph-mon[75294]: 5.f scrub ok
Jan 31 08:02:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:02:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:02:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:02:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:02:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:02:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:02:26 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 31 08:02:26 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 31 08:02:26 compute-0 podman[102597]: 2026-01-31 08:02:26.298105049 +0000 UTC m=+0.044614240 container create 16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:02:26 compute-0 systemd[76681]: Created slice User Background Tasks Slice.
Jan 31 08:02:26 compute-0 systemd[76681]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 08:02:26 compute-0 systemd[76681]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 08:02:26 compute-0 systemd[1]: Started libpod-conmon-16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5.scope.
Jan 31 08:02:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:26 compute-0 podman[102597]: 2026-01-31 08:02:26.27708588 +0000 UTC m=+0.023595071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:02:26 compute-0 podman[102597]: 2026-01-31 08:02:26.387124111 +0000 UTC m=+0.133633292 container init 16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:02:26 compute-0 podman[102597]: 2026-01-31 08:02:26.392216819 +0000 UTC m=+0.138725980 container start 16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:02:26 compute-0 podman[102597]: 2026-01-31 08:02:26.396490641 +0000 UTC m=+0.142999802 container attach 16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:02:26 compute-0 peaceful_torvalds[102615]: 167 167
Jan 31 08:02:26 compute-0 systemd[1]: libpod-16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5.scope: Deactivated successfully.
Jan 31 08:02:26 compute-0 podman[102597]: 2026-01-31 08:02:26.398540104 +0000 UTC m=+0.145049265 container died 16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:02:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-632b8b9ca18f872fcfba18f1f1a8bfec34997c3c888803994ea860000391fa8e-merged.mount: Deactivated successfully.
Jan 31 08:02:26 compute-0 podman[102597]: 2026-01-31 08:02:26.449126848 +0000 UTC m=+0.195636019 container remove 16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:02:26 compute-0 systemd[1]: libpod-conmon-16c4950515053c2c12bd9493dcedc31684a7e177ca885f9798c653b0acbdb3c5.scope: Deactivated successfully.
Jan 31 08:02:26 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 31 08:02:26 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 31 08:02:26 compute-0 podman[102639]: 2026-01-31 08:02:26.609746335 +0000 UTC m=+0.051798463 container create 76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:02:26 compute-0 systemd[1]: Started libpod-conmon-76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4.scope.
Jan 31 08:02:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75525ac55b13abf2c506650c2ba878b07bea8202649c6e0c7abfbc5b04d48e28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75525ac55b13abf2c506650c2ba878b07bea8202649c6e0c7abfbc5b04d48e28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75525ac55b13abf2c506650c2ba878b07bea8202649c6e0c7abfbc5b04d48e28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75525ac55b13abf2c506650c2ba878b07bea8202649c6e0c7abfbc5b04d48e28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75525ac55b13abf2c506650c2ba878b07bea8202649c6e0c7abfbc5b04d48e28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:26 compute-0 podman[102639]: 2026-01-31 08:02:26.589419837 +0000 UTC m=+0.031471985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:02:26 compute-0 podman[102639]: 2026-01-31 08:02:26.70953494 +0000 UTC m=+0.151587128 container init 76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:02:26 compute-0 podman[102639]: 2026-01-31 08:02:26.71728187 +0000 UTC m=+0.159333998 container start 76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:02:26 compute-0 podman[102639]: 2026-01-31 08:02:26.728351623 +0000 UTC m=+0.170403781 container attach 76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:02:26 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 08:02:26 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 08:02:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 31 08:02:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 08:02:27 compute-0 charming_fermi[102655]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:02:27 compute-0 charming_fermi[102655]: --> All data devices are unavailable
Jan 31 08:02:27 compute-0 systemd[1]: libpod-76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4.scope: Deactivated successfully.
Jan 31 08:02:27 compute-0 podman[102639]: 2026-01-31 08:02:27.110853009 +0000 UTC m=+0.552905137 container died 76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:02:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 08:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-75525ac55b13abf2c506650c2ba878b07bea8202649c6e0c7abfbc5b04d48e28-merged.mount: Deactivated successfully.
Jan 31 08:02:27 compute-0 ceph-mon[75294]: 6.0 scrub starts
Jan 31 08:02:27 compute-0 ceph-mon[75294]: 6.0 scrub ok
Jan 31 08:02:27 compute-0 ceph-mon[75294]: 5.19 scrub starts
Jan 31 08:02:27 compute-0 ceph-mon[75294]: 5.19 scrub ok
Jan 31 08:02:27 compute-0 ceph-mon[75294]: pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 08:02:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 08:02:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 08:02:27 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 08:02:27 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 119 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=119 pruub=14.227397919s) [2] r=-1 lpr=119 pi=[83,119)/1 crt=71'551 active pruub 259.390563965s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:27 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 119 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=119 pruub=14.227314949s) [2] r=-1 lpr=119 pi=[83,119)/1 crt=71'551 unknown NOTIFY pruub 259.390563965s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:27 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 119 pg[9.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=119) [2] r=0 lpr=119 pi=[83,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:27 compute-0 podman[102639]: 2026-01-31 08:02:27.185865319 +0000 UTC m=+0.627917447 container remove 76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:02:27 compute-0 systemd[1]: libpod-conmon-76b833e5c2b334fa7bd97c617fe630d692465ab1e5d223760e85e1fa1967b8b4.scope: Deactivated successfully.
Jan 31 08:02:27 compute-0 sudo[102561]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:27 compute-0 sudo[102688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:02:27 compute-0 sudo[102688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:27 compute-0 sudo[102688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:27 compute-0 sudo[102713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:02:27 compute-0 sudo[102713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:27 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 31 08:02:27 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 31 08:02:27 compute-0 podman[102750]: 2026-01-31 08:02:27.61120558 +0000 UTC m=+0.040559975 container create ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:02:27 compute-0 systemd[1]: Started libpod-conmon-ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea.scope.
Jan 31 08:02:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:27 compute-0 podman[102750]: 2026-01-31 08:02:27.590400678 +0000 UTC m=+0.019755103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:02:27 compute-0 podman[102750]: 2026-01-31 08:02:27.692978589 +0000 UTC m=+0.122333004 container init ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:02:27 compute-0 podman[102750]: 2026-01-31 08:02:27.698135439 +0000 UTC m=+0.127489834 container start ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:02:27 compute-0 goofy_bardeen[102766]: 167 167
Jan 31 08:02:27 compute-0 systemd[1]: libpod-ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea.scope: Deactivated successfully.
Jan 31 08:02:27 compute-0 podman[102750]: 2026-01-31 08:02:27.703491874 +0000 UTC m=+0.132846269 container attach ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:02:27 compute-0 podman[102750]: 2026-01-31 08:02:27.704117844 +0000 UTC m=+0.133472249 container died ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-515c044bb772a88fd1bff8d6194c03516b730764b72e57f343e1b73740eb3e23-merged.mount: Deactivated successfully.
Jan 31 08:02:27 compute-0 podman[102750]: 2026-01-31 08:02:27.757403711 +0000 UTC m=+0.186758096 container remove ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bardeen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:02:27 compute-0 systemd[1]: libpod-conmon-ba21494522e441bebf0c7b5640998bb01344e0faa55e8bb4b647eb0134f52aea.scope: Deactivated successfully.
Jan 31 08:02:27 compute-0 podman[102794]: 2026-01-31 08:02:27.902339482 +0000 UTC m=+0.058196830 container create 6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:02:27 compute-0 podman[102794]: 2026-01-31 08:02:27.867702642 +0000 UTC m=+0.023560030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:02:27 compute-0 systemd[1]: Started libpod-conmon-6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c.scope.
Jan 31 08:02:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e98611597a0a0b36c8a7e4992f8a155ac7de1fade609a55257c84fec52d45f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e98611597a0a0b36c8a7e4992f8a155ac7de1fade609a55257c84fec52d45f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e98611597a0a0b36c8a7e4992f8a155ac7de1fade609a55257c84fec52d45f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e98611597a0a0b36c8a7e4992f8a155ac7de1fade609a55257c84fec52d45f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:28 compute-0 podman[102794]: 2026-01-31 08:02:28.030817975 +0000 UTC m=+0.186675353 container init 6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:02:28 compute-0 podman[102794]: 2026-01-31 08:02:28.036009936 +0000 UTC m=+0.191867284 container start 6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:02:28 compute-0 podman[102794]: 2026-01-31 08:02:28.042729004 +0000 UTC m=+0.198586352 container attach 6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:02:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 08:02:28 compute-0 ceph-mon[75294]: 7.2 scrub starts
Jan 31 08:02:28 compute-0 ceph-mon[75294]: 7.2 scrub ok
Jan 31 08:02:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 08:02:28 compute-0 ceph-mon[75294]: osdmap e119: 3 total, 3 up, 3 in
Jan 31 08:02:28 compute-0 ceph-mon[75294]: 5.18 scrub starts
Jan 31 08:02:28 compute-0 ceph-mon[75294]: 5.18 scrub ok
Jan 31 08:02:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 08:02:28 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 08:02:28 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 120 pg[9.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=120) [2]/[0] r=-1 lpr=120 pi=[83,120)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:28 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 120 pg[9.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=120) [2]/[0] r=-1 lpr=120 pi=[83,120)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:28 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 120 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=120) [2]/[0] r=0 lpr=120 pi=[83,120)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:28 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 120 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=83/84 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=120) [2]/[0] r=0 lpr=120 pi=[83,120)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:28 compute-0 youthful_einstein[102810]: {
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:     "0": [
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:         {
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "devices": [
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "/dev/loop3"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             ],
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_name": "ceph_lv0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_size": "21470642176",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "name": "ceph_lv0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "tags": {
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cluster_name": "ceph",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.crush_device_class": "",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.encrypted": "0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.objectstore": "bluestore",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osd_id": "0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.type": "block",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.vdo": "0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.with_tpm": "0"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             },
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "type": "block",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "vg_name": "ceph_vg0"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:         }
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:     ],
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:     "1": [
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:         {
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "devices": [
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "/dev/loop4"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             ],
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_name": "ceph_lv1",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_size": "21470642176",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "name": "ceph_lv1",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "tags": {
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cluster_name": "ceph",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.crush_device_class": "",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.encrypted": "0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.objectstore": "bluestore",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osd_id": "1",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.type": "block",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.vdo": "0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.with_tpm": "0"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             },
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "type": "block",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "vg_name": "ceph_vg1"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:         }
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:     ],
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:     "2": [
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:         {
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "devices": [
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "/dev/loop5"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             ],
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_name": "ceph_lv2",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_size": "21470642176",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "name": "ceph_lv2",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "tags": {
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.cluster_name": "ceph",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.crush_device_class": "",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.encrypted": "0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.objectstore": "bluestore",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osd_id": "2",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.type": "block",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.vdo": "0",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:                 "ceph.with_tpm": "0"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             },
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "type": "block",
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:             "vg_name": "ceph_vg2"
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:         }
Jan 31 08:02:28 compute-0 youthful_einstein[102810]:     ]
Jan 31 08:02:28 compute-0 youthful_einstein[102810]: }
Jan 31 08:02:28 compute-0 systemd[1]: libpod-6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c.scope: Deactivated successfully.
Jan 31 08:02:28 compute-0 podman[102794]: 2026-01-31 08:02:28.336444205 +0000 UTC m=+0.492301573 container died 6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0e98611597a0a0b36c8a7e4992f8a155ac7de1fade609a55257c84fec52d45f-merged.mount: Deactivated successfully.
Jan 31 08:02:28 compute-0 podman[102794]: 2026-01-31 08:02:28.411851427 +0000 UTC m=+0.567708775 container remove 6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:02:28 compute-0 systemd[1]: libpod-conmon-6e0c33e07c735bd61efef99e44ad9973e8e241faf684aae83ed9d217a1f26c6c.scope: Deactivated successfully.
Jan 31 08:02:28 compute-0 sudo[102713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:28 compute-0 sudo[102830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:02:28 compute-0 sudo[102830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:28 compute-0 sudo[102830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:28 compute-0 sudo[102855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:02:28 compute-0 sudo[102855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 31 08:02:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 08:02:28 compute-0 podman[102892]: 2026-01-31 08:02:28.83528134 +0000 UTC m=+0.044501187 container create c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ritchie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:02:28 compute-0 systemd[1]: Started libpod-conmon-c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af.scope.
Jan 31 08:02:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:28 compute-0 podman[102892]: 2026-01-31 08:02:28.900580989 +0000 UTC m=+0.109800856 container init c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:02:28 compute-0 podman[102892]: 2026-01-31 08:02:28.813548328 +0000 UTC m=+0.022768185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:02:28 compute-0 podman[102892]: 2026-01-31 08:02:28.909763313 +0000 UTC m=+0.118983150 container start c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ritchie, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:02:28 compute-0 podman[102892]: 2026-01-31 08:02:28.914793258 +0000 UTC m=+0.124013145 container attach c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:02:28 compute-0 stoic_ritchie[102908]: 167 167
Jan 31 08:02:28 compute-0 systemd[1]: libpod-c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af.scope: Deactivated successfully.
Jan 31 08:02:28 compute-0 podman[102892]: 2026-01-31 08:02:28.917960777 +0000 UTC m=+0.127180614 container died c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ritchie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-344ddc04f8c386429464937997562537202c6f0d7e871009b4b5084b5c4f1d96-merged.mount: Deactivated successfully.
Jan 31 08:02:28 compute-0 podman[102892]: 2026-01-31 08:02:28.957896021 +0000 UTC m=+0.167115858 container remove c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:02:28 compute-0 systemd[1]: libpod-conmon-c98ae617dbd66222a43c28d5b5bba7a9bbc46c2605e322937d3bf292836c11af.scope: Deactivated successfully.
Jan 31 08:02:29 compute-0 podman[102933]: 2026-01-31 08:02:29.090316056 +0000 UTC m=+0.041196206 container create 32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:02:29 compute-0 systemd[1]: Started libpod-conmon-32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd.scope.
Jan 31 08:02:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8466570259e1c0bcf3d9862d96eef47c202eb59268ebb1bce5235e09670e3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8466570259e1c0bcf3d9862d96eef47c202eb59268ebb1bce5235e09670e3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8466570259e1c0bcf3d9862d96eef47c202eb59268ebb1bce5235e09670e3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8466570259e1c0bcf3d9862d96eef47c202eb59268ebb1bce5235e09670e3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:29 compute-0 podman[102933]: 2026-01-31 08:02:29.072061301 +0000 UTC m=+0.022941471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:02:29 compute-0 podman[102933]: 2026-01-31 08:02:29.174766507 +0000 UTC m=+0.125646677 container init 32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rosalind, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:02:29 compute-0 podman[102933]: 2026-01-31 08:02:29.182934529 +0000 UTC m=+0.133814689 container start 32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rosalind, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:02:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 08:02:29 compute-0 podman[102933]: 2026-01-31 08:02:29.187594153 +0000 UTC m=+0.138474323 container attach 32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rosalind, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:02:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 08:02:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 08:02:29 compute-0 ceph-mon[75294]: osdmap e120: 3 total, 3 up, 3 in
Jan 31 08:02:29 compute-0 ceph-mon[75294]: pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 08:02:29 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 08:02:29 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 121 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=120/121 n=6 ec=75/43 lis/c=83/83 les/c/f=84/84/0 sis=120) [2]/[0] async=[2] r=0 lpr=120 pi=[83,120)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:29 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 31 08:02:29 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 31 08:02:29 compute-0 lvm[103025]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:02:29 compute-0 lvm[103025]: VG ceph_vg0 finished
Jan 31 08:02:29 compute-0 lvm[103028]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:02:29 compute-0 lvm[103028]: VG ceph_vg1 finished
Jan 31 08:02:29 compute-0 lvm[103030]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:02:29 compute-0 lvm[103030]: VG ceph_vg2 finished
Jan 31 08:02:29 compute-0 determined_rosalind[102949]: {}
Jan 31 08:02:29 compute-0 podman[102933]: 2026-01-31 08:02:29.940763822 +0000 UTC m=+0.891643982 container died 32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rosalind, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:02:29 compute-0 systemd[1]: libpod-32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd.scope: Deactivated successfully.
Jan 31 08:02:29 compute-0 systemd[1]: libpod-32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd.scope: Consumed 1.084s CPU time.
Jan 31 08:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e8466570259e1c0bcf3d9862d96eef47c202eb59268ebb1bce5235e09670e3f-merged.mount: Deactivated successfully.
Jan 31 08:02:30 compute-0 podman[102933]: 2026-01-31 08:02:30.00410732 +0000 UTC m=+0.954987470 container remove 32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rosalind, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:02:30 compute-0 systemd[1]: libpod-conmon-32b174763d5aaaf1a434b9f932ab10e13c7d0fc99ff14b14b9e26cf4c486b0fd.scope: Deactivated successfully.
Jan 31 08:02:30 compute-0 sudo[102855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:02:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:02:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:02:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:02:30 compute-0 sudo[103044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:02:30 compute-0 sudo[103044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:30 compute-0 sudo[103044]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 31 08:02:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 08:02:30 compute-0 ceph-mon[75294]: osdmap e121: 3 total, 3 up, 3 in
Jan 31 08:02:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:02:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:02:30 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 08:02:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 08:02:30 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 31 08:02:30 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 122 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=120/121 n=6 ec=75/43 lis/c=120/83 les/c/f=121/84/0 sis=122 pruub=14.996467590s) [2] async=[2] r=-1 lpr=122 pi=[83,122)/1 crt=71'551 active pruub 263.217895508s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:30 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 122 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=120/121 n=6 ec=75/43 lis/c=120/83 les/c/f=121/84/0 sis=122 pruub=14.996337891s) [2] r=-1 lpr=122 pi=[83,122)/1 crt=71'551 unknown NOTIFY pruub 263.217895508s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:30 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 08:02:30 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 122 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=120/83 les/c/f=121/84/0 sis=122) [2] r=0 lpr=122 pi=[83,122)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:30 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 122 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=120/83 les/c/f=121/84/0 sis=122) [2] r=0 lpr=122 pi=[83,122)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 31 08:02:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 08:02:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 31 08:02:31 compute-0 ceph-mon[75294]: 7.1 scrub starts
Jan 31 08:02:31 compute-0 ceph-mon[75294]: 7.1 scrub ok
Jan 31 08:02:31 compute-0 ceph-mon[75294]: 6.3 scrub starts
Jan 31 08:02:31 compute-0 ceph-mon[75294]: osdmap e122: 3 total, 3 up, 3 in
Jan 31 08:02:31 compute-0 ceph-mon[75294]: 6.3 scrub ok
Jan 31 08:02:31 compute-0 ceph-mon[75294]: pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 08:02:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 08:02:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 08:02:31 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 31 08:02:31 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 123 pg[9.19( v 71'551 (0'0,71'551] local-lis/les=122/123 n=6 ec=75/43 lis/c=120/83 les/c/f=121/84/0 sis=122) [2] r=0 lpr=122 pi=[83,122)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:31 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 08:02:31 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 08:02:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 08:02:32 compute-0 ceph-mon[75294]: osdmap e123: 3 total, 3 up, 3 in
Jan 31 08:02:32 compute-0 ceph-mon[75294]: 2.6 scrub starts
Jan 31 08:02:32 compute-0 ceph-mon[75294]: 2.6 scrub ok
Jan 31 08:02:32 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 31 08:02:32 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 31 08:02:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 31 08:02:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 08:02:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 31 08:02:33 compute-0 ceph-mon[75294]: 5.1a scrub starts
Jan 31 08:02:33 compute-0 ceph-mon[75294]: 5.1a scrub ok
Jan 31 08:02:33 compute-0 ceph-mon[75294]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:33 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 08:02:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 08:02:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 08:02:33 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 31 08:02:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 124 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=103/104 n=6 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=124 pruub=13.303961754s) [0] r=-1 lpr=124 pi=[103,124)/1 crt=71'551 active pruub 256.332641602s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:33 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 124 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=103/104 n=6 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=124 pruub=13.303911209s) [0] r=-1 lpr=124 pi=[103,124)/1 crt=71'551 unknown NOTIFY pruub 256.332641602s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:33 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 124 pg[9.1c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=124) [0] r=0 lpr=124 pi=[103,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:33 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 08:02:33 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 08:02:33 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 08:02:33 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 08:02:34 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 08:02:34 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 08:02:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 31 08:02:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 08:02:34 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 31 08:02:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 125 pg[9.1c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[103,125)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:34 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 125 pg[9.1c( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[103,125)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 08:02:34 compute-0 ceph-mon[75294]: osdmap e124: 3 total, 3 up, 3 in
Jan 31 08:02:34 compute-0 ceph-mon[75294]: 6.4 scrub starts
Jan 31 08:02:34 compute-0 ceph-mon[75294]: 6.4 scrub ok
Jan 31 08:02:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 125 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=103/104 n=6 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=125) [0]/[2] r=0 lpr=125 pi=[103,125)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:34 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 125 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=103/104 n=6 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=125) [0]/[2] r=0 lpr=125 pi=[103,125)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:34 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 31 08:02:34 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 31 08:02:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Jan 31 08:02:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 31 08:02:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 08:02:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 31 08:02:35 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 08:02:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 08:02:35 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 31 08:02:35 compute-0 ceph-mon[75294]: 3.7 scrub starts
Jan 31 08:02:35 compute-0 ceph-mon[75294]: 3.7 scrub ok
Jan 31 08:02:35 compute-0 ceph-mon[75294]: 6.7 scrub starts
Jan 31 08:02:35 compute-0 ceph-mon[75294]: 6.7 scrub ok
Jan 31 08:02:35 compute-0 ceph-mon[75294]: osdmap e125: 3 total, 3 up, 3 in
Jan 31 08:02:35 compute-0 ceph-mon[75294]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Jan 31 08:02:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 08:02:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 126 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=125/126 n=6 ec=75/43 lis/c=103/103 les/c/f=104/104/0 sis=125) [0]/[2] async=[0] r=0 lpr=125 pi=[103,125)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 08:02:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 08:02:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 31 08:02:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 08:02:35 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 31 08:02:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 127 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=125/126 n=6 ec=75/43 lis/c=125/103 les/c/f=126/104/0 sis=127 pruub=15.465678215s) [0] async=[0] r=-1 lpr=127 pi=[103,127)/1 crt=71'551 active pruub 261.068908691s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:35 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 127 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=125/126 n=6 ec=75/43 lis/c=125/103 les/c/f=126/104/0 sis=127 pruub=15.465600014s) [0] r=-1 lpr=127 pi=[103,127)/1 crt=71'551 unknown NOTIFY pruub 261.068908691s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:35 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 127 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=125/103 les/c/f=126/104/0 sis=127) [0] r=0 lpr=127 pi=[103,127)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:35 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 127 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=125/103 les/c/f=126/104/0 sis=127) [0] r=0 lpr=127 pi=[103,127)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:36 compute-0 ceph-mon[75294]: 3.5 scrub starts
Jan 31 08:02:36 compute-0 ceph-mon[75294]: 3.5 scrub ok
Jan 31 08:02:36 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 08:02:36 compute-0 ceph-mon[75294]: osdmap e126: 3 total, 3 up, 3 in
Jan 31 08:02:36 compute-0 ceph-mon[75294]: 4.e scrub starts
Jan 31 08:02:36 compute-0 ceph-mon[75294]: 4.e scrub ok
Jan 31 08:02:36 compute-0 ceph-mon[75294]: osdmap e127: 3 total, 3 up, 3 in
Jan 31 08:02:36 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 08:02:36 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 08:02:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 86 B/s, 1 objects/s recovering
Jan 31 08:02:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 31 08:02:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 08:02:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 31 08:02:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 08:02:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 08:02:36 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 31 08:02:37 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 128 pg[9.1c( v 71'551 (0'0,71'551] local-lis/les=127/128 n=6 ec=75/43 lis/c=125/103 les/c/f=126/104/0 sis=127) [0] r=0 lpr=127 pi=[103,127)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:37 compute-0 ceph-mon[75294]: 7.e scrub starts
Jan 31 08:02:37 compute-0 ceph-mon[75294]: 7.e scrub ok
Jan 31 08:02:37 compute-0 ceph-mon[75294]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 86 B/s, 1 objects/s recovering
Jan 31 08:02:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 08:02:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 08:02:37 compute-0 ceph-mon[75294]: osdmap e128: 3 total, 3 up, 3 in
Jan 31 08:02:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 31 08:02:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 31 08:02:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 128 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=128 pruub=9.914005280s) [0] r=-1 lpr=128 pi=[91,128)/1 crt=71'551 active pruub 257.304962158s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:37 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 128 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=128 pruub=9.913968086s) [0] r=-1 lpr=128 pi=[91,128)/1 crt=71'551 unknown NOTIFY pruub 257.304962158s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:37 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=128) [0] r=0 lpr=128 pi=[91,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 31 08:02:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 08:02:38 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 31 08:02:38 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 129 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=129) [0]/[2] r=0 lpr=129 pi=[91,129)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:38 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 129 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=91/92 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=129) [0]/[2] r=0 lpr=129 pi=[91,129)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:38 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 08:02:38 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[91,129)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:38 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[91,129)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:38 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 08:02:38 compute-0 ceph-mon[75294]: 6.b scrub starts
Jan 31 08:02:38 compute-0 ceph-mon[75294]: 6.b scrub ok
Jan 31 08:02:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:39 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 31 08:02:39 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 31 08:02:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 31 08:02:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 08:02:39 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 31 08:02:40 compute-0 ceph-mon[75294]: osdmap e129: 3 total, 3 up, 3 in
Jan 31 08:02:40 compute-0 ceph-mon[75294]: 4.1b scrub starts
Jan 31 08:02:40 compute-0 ceph-mon[75294]: 4.1b scrub ok
Jan 31 08:02:40 compute-0 ceph-mon[75294]: pgmap v294: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:40 compute-0 ceph-mon[75294]: osdmap e130: 3 total, 3 up, 3 in
Jan 31 08:02:40 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 08:02:40 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 08:02:40 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 130 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=129/130 n=6 ec=75/43 lis/c=91/91 les/c/f=92/92/0 sis=129) [0]/[2] async=[0] r=0 lpr=129 pi=[91,129)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:40 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 31 08:02:40 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 31 08:02:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 31 08:02:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 08:02:40 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 31 08:02:40 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 131 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=129/91 les/c/f=130/92/0 sis=131) [0] r=0 lpr=131 pi=[91,131)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:40 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 131 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=129/91 les/c/f=130/92/0 sis=131) [0] r=0 lpr=131 pi=[91,131)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:40 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 131 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=129/130 n=6 ec=75/43 lis/c=129/91 les/c/f=130/92/0 sis=131 pruub=15.495388031s) [0] async=[0] r=-1 lpr=131 pi=[91,131)/1 crt=71'551 active pruub 266.099395752s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:40 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 131 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=129/130 n=6 ec=75/43 lis/c=129/91 les/c/f=130/92/0 sis=131 pruub=15.495306969s) [0] r=-1 lpr=131 pi=[91,131)/1 crt=71'551 unknown NOTIFY pruub 266.099395752s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:41 compute-0 ceph-mon[75294]: 6.9 scrub starts
Jan 31 08:02:41 compute-0 ceph-mon[75294]: 6.9 scrub ok
Jan 31 08:02:41 compute-0 ceph-mon[75294]: 6.5 scrub starts
Jan 31 08:02:41 compute-0 ceph-mon[75294]: 6.5 scrub ok
Jan 31 08:02:41 compute-0 ceph-mon[75294]: 6.1 scrub starts
Jan 31 08:02:41 compute-0 ceph-mon[75294]: 6.1 scrub ok
Jan 31 08:02:41 compute-0 ceph-mon[75294]: pgmap v296: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:41 compute-0 ceph-mon[75294]: osdmap e131: 3 total, 3 up, 3 in
Jan 31 08:02:41 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 31 08:02:41 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 31 08:02:41 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 08:02:41 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 08:02:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 31 08:02:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 08:02:41 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 08:02:41 compute-0 ceph-osd[85864]: osd.0 pg_epoch: 132 pg[9.1e( v 71'551 (0'0,71'551] local-lis/les=131/132 n=6 ec=75/43 lis/c=129/91 les/c/f=130/92/0 sis=131) [0] r=0 lpr=131 pi=[91,131)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:42 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 31 08:02:42 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 31 08:02:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:42 compute-0 ceph-mon[75294]: 6.a scrub starts
Jan 31 08:02:42 compute-0 ceph-mon[75294]: 6.a scrub ok
Jan 31 08:02:42 compute-0 ceph-mon[75294]: 6.e scrub starts
Jan 31 08:02:42 compute-0 ceph-mon[75294]: 6.e scrub ok
Jan 31 08:02:42 compute-0 ceph-mon[75294]: osdmap e132: 3 total, 3 up, 3 in
Jan 31 08:02:43 compute-0 sudo[102306]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:43 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 08:02:43 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 08:02:43 compute-0 sudo[103218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuvayzonebpzbuqjuxqnwtusoramyjvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846563.506376-132-102275556135256/AnsiballZ_command.py'
Jan 31 08:02:43 compute-0 sudo[103218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:43 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 31 08:02:43 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 31 08:02:43 compute-0 ceph-mon[75294]: 6.6 scrub starts
Jan 31 08:02:43 compute-0 ceph-mon[75294]: 6.6 scrub ok
Jan 31 08:02:43 compute-0 ceph-mon[75294]: pgmap v299: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:43 compute-0 python3.9[103220]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:02:44 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 31 08:02:44 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 31 08:02:44 compute-0 sudo[103218]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 170 B/s wr, 7 op/s; 20 B/s, 0 objects/s recovering
Jan 31 08:02:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:02:44 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:02:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 31 08:02:44 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:02:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 08:02:44 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 31 08:02:44 compute-0 ceph-mon[75294]: 6.2 scrub starts
Jan 31 08:02:44 compute-0 ceph-mon[75294]: 6.2 scrub ok
Jan 31 08:02:44 compute-0 ceph-mon[75294]: 4.1a scrub starts
Jan 31 08:02:44 compute-0 ceph-mon[75294]: 4.1a scrub ok
Jan 31 08:02:44 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:02:45 compute-0 sudo[103505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjggknvbaqclururgqaiqbpnzrducqpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846564.7236211-140-203598836731866/AnsiballZ_selinux.py'
Jan 31 08:02:45 compute-0 sudo[103505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:45 compute-0 python3.9[103507]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 08:02:45 compute-0 sudo[103505]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:45 compute-0 ceph-mon[75294]: 8.1f scrub starts
Jan 31 08:02:45 compute-0 ceph-mon[75294]: 8.1f scrub ok
Jan 31 08:02:45 compute-0 ceph-mon[75294]: pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 170 B/s wr, 7 op/s; 20 B/s, 0 objects/s recovering
Jan 31 08:02:45 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:02:45 compute-0 ceph-mon[75294]: osdmap e133: 3 total, 3 up, 3 in
Jan 31 08:02:46 compute-0 sudo[103657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocilshmyjbjagiwocfgrbgxoojdilbii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846565.9840689-151-1122153230343/AnsiballZ_command.py'
Jan 31 08:02:46 compute-0 sudo[103657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:46 compute-0 python3.9[103659]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 08:02:46 compute-0 sudo[103657]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:46 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 08:02:46 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 08:02:46 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 133 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=93/94 n=6 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=133 pruub=10.889122963s) [1] r=-1 lpr=133 pi=[93,133)/1 crt=71'551 active pruub 267.237823486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:46 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 133 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=93/94 n=6 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=133 pruub=10.889084816s) [1] r=-1 lpr=133 pi=[93,133)/1 crt=71'551 unknown NOTIFY pruub 267.237823486s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:46 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=133) [1] r=0 lpr=133 pi=[93,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:46 compute-0 sudo[103809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkzlyvbhworfpdvtiduphtrgstvcfmbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846566.595024-159-99240885175074/AnsiballZ_file.py'
Jan 31 08:02:46 compute-0 sudo[103809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 170 B/s wr, 7 op/s; 20 B/s, 0 objects/s recovering
Jan 31 08:02:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 31 08:02:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 08:02:46 compute-0 ceph-mon[75294]: 6.d scrub starts
Jan 31 08:02:46 compute-0 ceph-mon[75294]: 6.d scrub ok
Jan 31 08:02:46 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 31 08:02:46 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=134) [1]/[2] r=-1 lpr=134 pi=[93,134)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:46 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=134) [1]/[2] r=-1 lpr=134 pi=[93,134)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:46 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 134 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=93/94 n=6 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=134) [1]/[2] r=0 lpr=134 pi=[93,134)/1 crt=71'551 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:46 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 134 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=93/94 n=6 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=134) [1]/[2] r=0 lpr=134 pi=[93,134)/1 crt=71'551 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:47 compute-0 python3.9[103811]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:47 compute-0 sudo[103809]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:47 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 08:02:47 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 08:02:47 compute-0 sudo[103961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwdauuocwndypkvudsdienuezupviaug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846567.1776736-167-123072979486652/AnsiballZ_mount.py'
Jan 31 08:02:47 compute-0 sudo[103961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:47 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 31 08:02:47 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 31 08:02:47 compute-0 python3.9[103963]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 08:02:47 compute-0 sudo[103961]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 31 08:02:47 compute-0 ceph-mon[75294]: pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 170 B/s wr, 7 op/s; 20 B/s, 0 objects/s recovering
Jan 31 08:02:47 compute-0 ceph-mon[75294]: osdmap e134: 3 total, 3 up, 3 in
Jan 31 08:02:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 08:02:47 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 31 08:02:48 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 135 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=134/135 n=6 ec=75/43 lis/c=93/93 les/c/f=94/94/0 sis=134) [1]/[2] async=[1] r=0 lpr=134 pi=[93,134)/1 crt=71'551 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:48 compute-0 sudo[104113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqntvzylldwkrwvtktzmtqozsqxdbeda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846568.4838622-195-99681080372444/AnsiballZ_file.py'
Jan 31 08:02:48 compute-0 sudo[104113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:48 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 31 08:02:48 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 31 08:02:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 7 op/s; 3 B/s, 1 objects/s recovering
Jan 31 08:02:48 compute-0 python3.9[104115]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:02:48 compute-0 sudo[104113]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 31 08:02:49 compute-0 ceph-mon[75294]: 10.16 scrub starts
Jan 31 08:02:49 compute-0 ceph-mon[75294]: 10.16 scrub ok
Jan 31 08:02:49 compute-0 ceph-mon[75294]: 7.1a scrub starts
Jan 31 08:02:49 compute-0 ceph-mon[75294]: 7.1a scrub ok
Jan 31 08:02:49 compute-0 ceph-mon[75294]: osdmap e135: 3 total, 3 up, 3 in
Jan 31 08:02:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 08:02:49 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 31 08:02:49 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 136 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=134/135 n=6 ec=75/43 lis/c=134/93 les/c/f=135/94/0 sis=136 pruub=14.984798431s) [1] async=[1] r=-1 lpr=136 pi=[93,136)/1 crt=71'551 active pruub 273.750061035s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:49 compute-0 ceph-osd[88061]: osd.2 pg_epoch: 136 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=134/135 n=6 ec=75/43 lis/c=134/93 les/c/f=135/94/0 sis=136 pruub=14.984746933s) [1] r=-1 lpr=136 pi=[93,136)/1 crt=71'551 unknown NOTIFY pruub 273.750061035s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:02:49 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 136 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=134/93 les/c/f=135/94/0 sis=136) [1] r=0 lpr=136 pi=[93,136)/1 pct=0'0 crt=71'551 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:02:49 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 136 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=0/0 n=6 ec=75/43 lis/c=134/93 les/c/f=135/94/0 sis=136) [1] r=0 lpr=136 pi=[93,136)/1 crt=71'551 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:02:49 compute-0 sudo[104266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iouduliyjnmmuactppdsesvjrfoyevfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846569.0713115-203-6941215722526/AnsiballZ_stat.py'
Jan 31 08:02:49 compute-0 sudo[104266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:49 compute-0 python3.9[104268]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:49 compute-0 sudo[104266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:49 compute-0 sudo[104344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erakmxkbonsmwrusevcyfbmrzowgrxln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846569.0713115-203-6941215722526/AnsiballZ_file.py'
Jan 31 08:02:49 compute-0 sudo[104344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:49 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 31 08:02:49 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 31 08:02:49 compute-0 python3.9[104346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:49 compute-0 sudo[104344]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 31 08:02:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 08:02:50 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 31 08:02:50 compute-0 ceph-mon[75294]: 7.c scrub starts
Jan 31 08:02:50 compute-0 ceph-mon[75294]: 7.c scrub ok
Jan 31 08:02:50 compute-0 ceph-mon[75294]: pgmap v305: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 7 op/s; 3 B/s, 1 objects/s recovering
Jan 31 08:02:50 compute-0 ceph-mon[75294]: osdmap e136: 3 total, 3 up, 3 in
Jan 31 08:02:50 compute-0 ceph-osd[86929]: osd.1 pg_epoch: 137 pg[9.1f( v 71'551 (0'0,71'551] local-lis/les=136/137 n=6 ec=75/43 lis/c=134/93 les/c/f=135/94/0 sis=136) [1] r=0 lpr=136 pi=[93,136)/1 crt=71'551 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:02:50 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 31 08:02:50 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 31 08:02:50 compute-0 sudo[104496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpbpyvfnrqqvrsliivrauhwmfwyxdtuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846570.2824514-224-37359667697343/AnsiballZ_stat.py'
Jan 31 08:02:50 compute-0 sudo[104496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:50 compute-0 python3.9[104498]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:02:50 compute-0 sudo[104496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:02:50
Jan 31 08:02:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:02:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:02:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Jan 31 08:02:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:02:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:51 compute-0 ceph-mon[75294]: 3.1d scrub starts
Jan 31 08:02:51 compute-0 ceph-mon[75294]: 3.1d scrub ok
Jan 31 08:02:51 compute-0 ceph-mon[75294]: osdmap e137: 3 total, 3 up, 3 in
Jan 31 08:02:51 compute-0 ceph-mon[75294]: pgmap v308: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:51 compute-0 sudo[104650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikaerapwnvhqktwikbrtwmhcsuhezveb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846571.0916016-237-13762971594237/AnsiballZ_getent.py'
Jan 31 08:02:51 compute-0 sudo[104650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:51 compute-0 python3.9[104652]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 08:02:51 compute-0 sudo[104650]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:52 compute-0 ceph-mon[75294]: 11.17 scrub starts
Jan 31 08:02:52 compute-0 ceph-mon[75294]: 11.17 scrub ok
Jan 31 08:02:52 compute-0 sudo[104803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpzxyllnuowpbjuegprfzlcyhvhukrnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846571.8914626-247-157085482791479/AnsiballZ_getent.py'
Jan 31 08:02:52 compute-0 sudo[104803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:52 compute-0 python3.9[104805]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 08:02:52 compute-0 sudo[104803]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:52 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 08:02:52 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 08:02:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:52 compute-0 sudo[104956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqqklwbvydhqporytvxrzzvkflsphnwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846572.486537-255-214897457246641/AnsiballZ_group.py'
Jan 31 08:02:52 compute-0 sudo[104956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:53 compute-0 python3.9[104958]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:02:53 compute-0 sudo[104956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:53 compute-0 ceph-mon[75294]: 6.c scrub starts
Jan 31 08:02:53 compute-0 ceph-mon[75294]: 6.c scrub ok
Jan 31 08:02:53 compute-0 ceph-mon[75294]: pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:53 compute-0 sudo[105108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mofmpgllelxyqvaaoguhafahcvncmtyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846573.255716-264-239570152347619/AnsiballZ_file.py'
Jan 31 08:02:53 compute-0 sudo[105108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:53 compute-0 python3.9[105110]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 08:02:53 compute-0 sudo[105108]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:54 compute-0 sudo[105260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjcnnboozthpruocouwzmcidszltdxte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846573.9615417-275-272290271088690/AnsiballZ_dnf.py'
Jan 31 08:02:54 compute-0 sudo[105260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:54 compute-0 python3.9[105262]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:02:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:54 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 08:02:54 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 08:02:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Jan 31 08:02:55 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 08:02:55 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:02:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:02:55 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 08:02:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:02:55 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 08:02:55 compute-0 ceph-mon[75294]: 6.8 scrub starts
Jan 31 08:02:55 compute-0 ceph-mon[75294]: 6.8 scrub ok
Jan 31 08:02:55 compute-0 ceph-mon[75294]: pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Jan 31 08:02:56 compute-0 sudo[105260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 31 08:02:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 31 08:02:56 compute-0 sudo[105413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egqmyurkghaejoybsmesahvjvblwxyfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846576.201617-283-214379864981050/AnsiballZ_file.py'
Jan 31 08:02:56 compute-0 sudo[105413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:56 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 31 08:02:56 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 31 08:02:56 compute-0 python3.9[105415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:02:56 compute-0 sudo[105413]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:57 compute-0 ceph-mon[75294]: 8.14 scrub starts
Jan 31 08:02:57 compute-0 ceph-mon[75294]: 8.14 scrub ok
Jan 31 08:02:57 compute-0 ceph-mon[75294]: 6.f scrub starts
Jan 31 08:02:57 compute-0 ceph-mon[75294]: 6.f scrub ok
Jan 31 08:02:57 compute-0 ceph-mon[75294]: 11.16 scrub starts
Jan 31 08:02:57 compute-0 ceph-mon[75294]: 11.16 scrub ok
Jan 31 08:02:57 compute-0 sudo[105565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwayzpktspjiswwtazjirwcmgoicukqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846576.8030324-291-94522836641111/AnsiballZ_stat.py'
Jan 31 08:02:57 compute-0 sudo[105565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:57 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 31 08:02:57 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 31 08:02:57 compute-0 python3.9[105567]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:57 compute-0 sudo[105565]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:57 compute-0 sudo[105643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnarvzairzzdvxazsgfoinjiekvgchx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846576.8030324-291-94522836641111/AnsiballZ_file.py'
Jan 31 08:02:57 compute-0 sudo[105643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:57 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 31 08:02:57 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 31 08:02:57 compute-0 python3.9[105645]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:02:57 compute-0 sudo[105643]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:58 compute-0 ceph-mon[75294]: 10.1 scrub starts
Jan 31 08:02:58 compute-0 ceph-mon[75294]: 10.1 scrub ok
Jan 31 08:02:58 compute-0 ceph-mon[75294]: pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:58 compute-0 ceph-mon[75294]: 8.16 scrub starts
Jan 31 08:02:58 compute-0 ceph-mon[75294]: 8.16 scrub ok
Jan 31 08:02:58 compute-0 sudo[105795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqmplgilvjnpjedimztaxjmisyctzfqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846577.7756836-304-203906628445385/AnsiballZ_stat.py'
Jan 31 08:02:58 compute-0 sudo[105795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:58 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 08:02:58 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 08:02:58 compute-0 python3.9[105797]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:58 compute-0 sudo[105795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:58 compute-0 sudo[105873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdyeqqjybacuosnghoohzototykpyjvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846577.7756836-304-203906628445385/AnsiballZ_file.py'
Jan 31 08:02:58 compute-0 sudo[105873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:58 compute-0 python3.9[105875]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:02:58 compute-0 sudo[105873]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:58 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 08:02:58 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 08:02:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:02:59 compute-0 ceph-mon[75294]: 11.19 scrub starts
Jan 31 08:02:59 compute-0 ceph-mon[75294]: 11.19 scrub ok
Jan 31 08:02:59 compute-0 sudo[106025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pljiptckaidwgremoxukiuxbetgzpwnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846578.970644-319-213324757573803/AnsiballZ_dnf.py'
Jan 31 08:02:59 compute-0 sudo[106025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:59 compute-0 python3.9[106027]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:00 compute-0 ceph-mon[75294]: 8.1a scrub starts
Jan 31 08:03:00 compute-0 ceph-mon[75294]: 8.1a scrub ok
Jan 31 08:03:00 compute-0 ceph-mon[75294]: 10.1b scrub starts
Jan 31 08:03:00 compute-0 ceph-mon[75294]: 10.1b scrub ok
Jan 31 08:03:00 compute-0 ceph-mon[75294]: pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:00 compute-0 sudo[106025]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:01 compute-0 ceph-mon[75294]: pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 08:03:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 08:03:01 compute-0 python3.9[106178]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:03:02 compute-0 python3.9[106330]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 08:03:02 compute-0 ceph-mon[75294]: 8.1d scrub starts
Jan 31 08:03:02 compute-0 ceph-mon[75294]: 8.1d scrub ok
Jan 31 08:03:02 compute-0 python3.9[106480]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:03:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:03 compute-0 ceph-mon[75294]: pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:03 compute-0 sudo[106630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwonkebvoatqszhmhzufxpqnrflrjdld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846582.9561653-360-90213077347307/AnsiballZ_systemd.py'
Jan 31 08:03:03 compute-0 sudo[106630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:03 compute-0 python3.9[106632]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:03 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 08:03:03 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 08:03:03 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 08:03:03 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 08:03:04 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 08:03:04 compute-0 sudo[106630]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:04 compute-0 python3.9[106793]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 08:03:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:04 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 08:03:04 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 08:03:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:05 compute-0 ceph-mon[75294]: pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:05 compute-0 ceph-mon[75294]: 10.1f scrub starts
Jan 31 08:03:05 compute-0 ceph-mon[75294]: 10.1f scrub ok
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:03:06 compute-0 sudo[106943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhoyaftvmdfqybniasnvvzcabzhcjdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846586.159931-417-37709037807340/AnsiballZ_systemd.py'
Jan 31 08:03:06 compute-0 sudo[106943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:06 compute-0 python3.9[106945]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:06 compute-0 sudo[106943]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:07 compute-0 sudo[107097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqbdvlwmfcqkkjhrjhhrwbpvzqqpannq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846586.8013525-417-244062995587911/AnsiballZ_systemd.py'
Jan 31 08:03:07 compute-0 sudo[107097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:07 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 31 08:03:07 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 31 08:03:07 compute-0 python3.9[107099]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:07 compute-0 sudo[107097]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:07 compute-0 sshd-session[100394]: Connection closed by 192.168.122.30 port 53494
Jan 31 08:03:07 compute-0 sshd-session[100391]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:03:07 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 08:03:07 compute-0 systemd[1]: session-35.scope: Consumed 1min 1.873s CPU time.
Jan 31 08:03:07 compute-0 systemd-logind[810]: Session 35 logged out. Waiting for processes to exit.
Jan 31 08:03:07 compute-0 systemd-logind[810]: Removed session 35.
Jan 31 08:03:07 compute-0 ceph-mon[75294]: pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:08 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 08:03:08 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 08:03:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:08 compute-0 ceph-mon[75294]: 10.1e scrub starts
Jan 31 08:03:08 compute-0 ceph-mon[75294]: 10.1e scrub ok
Jan 31 08:03:09 compute-0 ceph-mon[75294]: 11.f scrub starts
Jan 31 08:03:09 compute-0 ceph-mon[75294]: 11.f scrub ok
Jan 31 08:03:09 compute-0 ceph-mon[75294]: pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 31 08:03:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 31 08:03:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:10 compute-0 ceph-mon[75294]: 8.18 scrub starts
Jan 31 08:03:10 compute-0 ceph-mon[75294]: 8.18 scrub ok
Jan 31 08:03:11 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 31 08:03:11 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 31 08:03:11 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 08:03:11 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 08:03:11 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 08:03:11 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 08:03:11 compute-0 ceph-mon[75294]: pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:11 compute-0 ceph-mon[75294]: 8.e scrub starts
Jan 31 08:03:11 compute-0 ceph-mon[75294]: 8.e scrub ok
Jan 31 08:03:11 compute-0 ceph-mon[75294]: 8.17 scrub starts
Jan 31 08:03:11 compute-0 ceph-mon[75294]: 8.17 scrub ok
Jan 31 08:03:12 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 31 08:03:12 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 31 08:03:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:12 compute-0 ceph-mon[75294]: 10.1d scrub starts
Jan 31 08:03:12 compute-0 ceph-mon[75294]: 10.1d scrub ok
Jan 31 08:03:13 compute-0 sshd-session[107126]: Accepted publickey for zuul from 192.168.122.30 port 43230 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:03:13 compute-0 systemd-logind[810]: New session 36 of user zuul.
Jan 31 08:03:13 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 31 08:03:13 compute-0 sshd-session[107126]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:03:13 compute-0 ceph-mon[75294]: 8.c scrub starts
Jan 31 08:03:13 compute-0 ceph-mon[75294]: 8.c scrub ok
Jan 31 08:03:13 compute-0 ceph-mon[75294]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:14 compute-0 python3.9[107279]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:15 compute-0 sudo[107433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihzhvjwabbuhdyjvkhvojqkevvhapaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846594.8020833-31-211787212362032/AnsiballZ_getent.py'
Jan 31 08:03:15 compute-0 sudo[107433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:15 compute-0 python3.9[107435]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 08:03:15 compute-0 sudo[107433]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:16 compute-0 ceph-mon[75294]: pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:16 compute-0 sudo[107586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxanrcmgwkxqolljkjmictgwsogbcds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846595.847616-43-170699307187653/AnsiballZ_setup.py'
Jan 31 08:03:16 compute-0 sudo[107586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:16 compute-0 python3.9[107588]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:03:16 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 31 08:03:16 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 31 08:03:16 compute-0 sudo[107586]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:16 compute-0 sudo[107670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wojtiuudyesrlqzuqnhfhsrvftiatnfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846595.847616-43-170699307187653/AnsiballZ_dnf.py'
Jan 31 08:03:16 compute-0 sudo[107670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:17 compute-0 ceph-mon[75294]: 11.13 scrub starts
Jan 31 08:03:17 compute-0 ceph-mon[75294]: 11.13 scrub ok
Jan 31 08:03:17 compute-0 python3.9[107672]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 08:03:18 compute-0 ceph-mon[75294]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:18 compute-0 sudo[107670]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:18 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 31 08:03:18 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 31 08:03:18 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 08:03:18 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 08:03:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:18 compute-0 sudo[107823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhcblgprhcxlfyhljxocwrvcvehrecw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846598.6651783-57-137752697757198/AnsiballZ_dnf.py'
Jan 31 08:03:18 compute-0 sudo[107823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:19 compute-0 ceph-mon[75294]: 11.0 scrub starts
Jan 31 08:03:19 compute-0 ceph-mon[75294]: 11.0 scrub ok
Jan 31 08:03:19 compute-0 ceph-mon[75294]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:19 compute-0 python3.9[107825]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:19 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 31 08:03:19 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 31 08:03:20 compute-0 ceph-mon[75294]: 10.1c scrub starts
Jan 31 08:03:20 compute-0 ceph-mon[75294]: 10.1c scrub ok
Jan 31 08:03:20 compute-0 ceph-mon[75294]: 8.3 scrub starts
Jan 31 08:03:20 compute-0 ceph-mon[75294]: 8.3 scrub ok
Jan 31 08:03:20 compute-0 sudo[107823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:20 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 08:03:20 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 08:03:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:21 compute-0 sudo[107976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgxxgptvdmkkyzkcxlvgsmvjtrdyfiwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846600.5062752-65-6671894543618/AnsiballZ_systemd.py'
Jan 31 08:03:21 compute-0 sudo[107976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:21 compute-0 ceph-mon[75294]: 8.8 scrub starts
Jan 31 08:03:21 compute-0 ceph-mon[75294]: 8.8 scrub ok
Jan 31 08:03:21 compute-0 ceph-mon[75294]: pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:21 compute-0 python3.9[107978]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:03:21 compute-0 sudo[107976]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:21 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 08:03:21 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 08:03:22 compute-0 python3.9[108131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:22 compute-0 ceph-mon[75294]: 10.18 scrub starts
Jan 31 08:03:22 compute-0 ceph-mon[75294]: 10.18 scrub ok
Jan 31 08:03:22 compute-0 sudo[108281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecbclveizmnguboyqbfgolbxsfilsxfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846602.3050644-83-82913245831224/AnsiballZ_sefcontext.py'
Jan 31 08:03:22 compute-0 sudo[108281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:22 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 08:03:22 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 08:03:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:22 compute-0 python3.9[108283]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 08:03:23 compute-0 sudo[108281]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:23 compute-0 ceph-mon[75294]: 10.5 scrub starts
Jan 31 08:03:23 compute-0 ceph-mon[75294]: 10.5 scrub ok
Jan 31 08:03:23 compute-0 ceph-mon[75294]: pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:23 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 31 08:03:23 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 31 08:03:23 compute-0 python3.9[108433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:23 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 08:03:23 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 08:03:24 compute-0 sudo[108589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsqiqdqqnnnwxeviqqwbifsaitbpnihw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846604.2596998-101-181996265199597/AnsiballZ_dnf.py'
Jan 31 08:03:24 compute-0 sudo[108589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:24 compute-0 ceph-mon[75294]: 8.0 scrub starts
Jan 31 08:03:24 compute-0 ceph-mon[75294]: 8.0 scrub ok
Jan 31 08:03:24 compute-0 ceph-mon[75294]: 11.1 scrub starts
Jan 31 08:03:24 compute-0 ceph-mon[75294]: 11.1 scrub ok
Jan 31 08:03:24 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 08:03:24 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 08:03:24 compute-0 python3.9[108591]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:25 compute-0 ceph-mon[75294]: 11.c scrub starts
Jan 31 08:03:25 compute-0 ceph-mon[75294]: 11.c scrub ok
Jan 31 08:03:25 compute-0 ceph-mon[75294]: pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 31 08:03:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 31 08:03:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:25 compute-0 sudo[108589]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:26 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 31 08:03:26 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 31 08:03:26 compute-0 sudo[108742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqpdejcdufxnmhuweqkzcoerfmbswugu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846606.1122034-109-250114956809955/AnsiballZ_command.py'
Jan 31 08:03:26 compute-0 sudo[108742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:26 compute-0 ceph-mon[75294]: 11.a scrub starts
Jan 31 08:03:26 compute-0 ceph-mon[75294]: 11.a scrub ok
Jan 31 08:03:26 compute-0 ceph-mon[75294]: 11.e scrub starts
Jan 31 08:03:26 compute-0 ceph-mon[75294]: 11.e scrub ok
Jan 31 08:03:26 compute-0 python3.9[108744]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:03:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:26 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 08:03:26 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 08:03:27 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 08:03:27 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 08:03:27 compute-0 sudo[108742]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:27 compute-0 ceph-mon[75294]: pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:27 compute-0 ceph-mon[75294]: 10.a scrub starts
Jan 31 08:03:27 compute-0 ceph-mon[75294]: 10.a scrub ok
Jan 31 08:03:27 compute-0 ceph-mon[75294]: 10.17 scrub starts
Jan 31 08:03:27 compute-0 ceph-mon[75294]: 10.17 scrub ok
Jan 31 08:03:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 08:03:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 08:03:27 compute-0 sudo[109029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnaeghppyikqwffrhvoytspghafnokve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846607.4819179-117-165350695185977/AnsiballZ_file.py'
Jan 31 08:03:27 compute-0 sudo[109029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:28 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 31 08:03:28 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 31 08:03:28 compute-0 python3.9[109031]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 08:03:28 compute-0 sudo[109029]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:28 compute-0 ceph-mon[75294]: 10.c scrub starts
Jan 31 08:03:28 compute-0 ceph-mon[75294]: 10.c scrub ok
Jan 31 08:03:28 compute-0 ceph-mon[75294]: 8.9 scrub starts
Jan 31 08:03:28 compute-0 ceph-mon[75294]: 8.9 scrub ok
Jan 31 08:03:28 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 08:03:28 compute-0 python3.9[109181]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:03:28 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 08:03:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:29 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 31 08:03:29 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 31 08:03:29 compute-0 sudo[109333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txmiyasrlxgcboqitxxsmaeleravtbby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846608.8866181-133-37263382118651/AnsiballZ_dnf.py'
Jan 31 08:03:29 compute-0 sudo[109333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:29 compute-0 python3.9[109335]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:29 compute-0 ceph-mon[75294]: 8.1 scrub starts
Jan 31 08:03:29 compute-0 ceph-mon[75294]: 8.1 scrub ok
Jan 31 08:03:29 compute-0 ceph-mon[75294]: pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:29 compute-0 ceph-mon[75294]: 10.4 scrub starts
Jan 31 08:03:29 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 08:03:29 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 08:03:29 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 31 08:03:29 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 31 08:03:30 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 31 08:03:30 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 31 08:03:30 compute-0 sudo[109338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:30 compute-0 sudo[109338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:30 compute-0 sudo[109338]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:30 compute-0 sudo[109363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:03:30 compute-0 sudo[109363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:30 compute-0 sudo[109333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:30 compute-0 sudo[109363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:03:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:03:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:03:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:03:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:03:30 compute-0 ceph-mon[75294]: 10.4 scrub ok
Jan 31 08:03:30 compute-0 ceph-mon[75294]: 8.7 scrub starts
Jan 31 08:03:30 compute-0 ceph-mon[75294]: 8.7 scrub ok
Jan 31 08:03:30 compute-0 ceph-mon[75294]: 10.0 scrub starts
Jan 31 08:03:30 compute-0 ceph-mon[75294]: 10.0 scrub ok
Jan 31 08:03:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:03:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:03:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:03:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:03:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:03:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:03:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:03:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:30 compute-0 sudo[109466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:30 compute-0 sudo[109466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:30 compute-0 sudo[109466]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:30 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 31 08:03:30 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 31 08:03:30 compute-0 sudo[109519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:03:30 compute-0 sudo[109519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:31 compute-0 sudo[109617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhqmayfnisnpcghjsnuvcvqqvotdlmoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846610.79844-142-152360721993227/AnsiballZ_dnf.py'
Jan 31 08:03:31 compute-0 sudo[109617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:31 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 31 08:03:31 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 31 08:03:31 compute-0 podman[109632]: 2026-01-31 08:03:31.161786672 +0000 UTC m=+0.021079213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:03:31 compute-0 podman[109632]: 2026-01-31 08:03:31.411921504 +0000 UTC m=+0.271214045 container create 654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:03:31 compute-0 systemd[1]: Started libpod-conmon-654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598.scope.
Jan 31 08:03:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:31 compute-0 podman[109632]: 2026-01-31 08:03:31.517375838 +0000 UTC m=+0.376668409 container init 654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:03:31 compute-0 podman[109632]: 2026-01-31 08:03:31.524249655 +0000 UTC m=+0.383542196 container start 654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:03:31 compute-0 laughing_darwin[109648]: 167 167
Jan 31 08:03:31 compute-0 systemd[1]: libpod-654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598.scope: Deactivated successfully.
Jan 31 08:03:31 compute-0 conmon[109648]: conmon 654faca2c878239dd094 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598.scope/container/memory.events
Jan 31 08:03:31 compute-0 podman[109632]: 2026-01-31 08:03:31.537866504 +0000 UTC m=+0.397159125 container attach 654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:03:31 compute-0 podman[109632]: 2026-01-31 08:03:31.539526901 +0000 UTC m=+0.398819442 container died 654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-42f640d3f3e19c282eefca8e2e9dc9c36661ecc17372a52fa22af249d204bc0e-merged.mount: Deactivated successfully.
Jan 31 08:03:31 compute-0 python3.9[109619]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:31 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 08:03:31 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 08:03:31 compute-0 ceph-mon[75294]: 11.4 scrub starts
Jan 31 08:03:31 compute-0 ceph-mon[75294]: 11.4 scrub ok
Jan 31 08:03:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:03:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:03:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:03:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:03:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:03:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:03:31 compute-0 ceph-mon[75294]: pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:31 compute-0 ceph-mon[75294]: 10.3 scrub starts
Jan 31 08:03:31 compute-0 ceph-mon[75294]: 10.3 scrub ok
Jan 31 08:03:31 compute-0 podman[109632]: 2026-01-31 08:03:31.883075743 +0000 UTC m=+0.742368314 container remove 654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:03:31 compute-0 systemd[1]: libpod-conmon-654faca2c878239dd0941a0a861cdfda06c06bf7eaed33c584ae53e852e8b598.scope: Deactivated successfully.
Jan 31 08:03:32 compute-0 podman[109674]: 2026-01-31 08:03:31.979666174 +0000 UTC m=+0.020689822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:03:32 compute-0 podman[109674]: 2026-01-31 08:03:32.16978298 +0000 UTC m=+0.210806608 container create 8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:03:32 compute-0 systemd[1]: Started libpod-conmon-8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907.scope.
Jan 31 08:03:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44acef0030fa335b2110ab3280ac76bed8d457f359b53f5be4f24f424bd22516/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44acef0030fa335b2110ab3280ac76bed8d457f359b53f5be4f24f424bd22516/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44acef0030fa335b2110ab3280ac76bed8d457f359b53f5be4f24f424bd22516/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44acef0030fa335b2110ab3280ac76bed8d457f359b53f5be4f24f424bd22516/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44acef0030fa335b2110ab3280ac76bed8d457f359b53f5be4f24f424bd22516/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:32 compute-0 podman[109674]: 2026-01-31 08:03:32.323175835 +0000 UTC m=+0.364199483 container init 8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:03:32 compute-0 podman[109674]: 2026-01-31 08:03:32.332504602 +0000 UTC m=+0.373528230 container start 8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:03:32 compute-0 podman[109674]: 2026-01-31 08:03:32.338674798 +0000 UTC m=+0.379698446 container attach 8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:03:32 compute-0 sharp_khorana[109692]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:03:32 compute-0 sharp_khorana[109692]: --> All data devices are unavailable
Jan 31 08:03:32 compute-0 systemd[1]: libpod-8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907.scope: Deactivated successfully.
Jan 31 08:03:32 compute-0 podman[109674]: 2026-01-31 08:03:32.742642317 +0000 UTC m=+0.783665985 container died 8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-44acef0030fa335b2110ab3280ac76bed8d457f359b53f5be4f24f424bd22516-merged.mount: Deactivated successfully.
Jan 31 08:03:32 compute-0 podman[109674]: 2026-01-31 08:03:32.797237578 +0000 UTC m=+0.838261206 container remove 8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_khorana, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:03:32 compute-0 systemd[1]: libpod-conmon-8311db14e64ff536ac95bd26eab95bd8931744f03bda375d705ce47bc3cad907.scope: Deactivated successfully.
Jan 31 08:03:32 compute-0 ceph-mon[75294]: 11.14 scrub starts
Jan 31 08:03:32 compute-0 ceph-mon[75294]: 11.14 scrub ok
Jan 31 08:03:32 compute-0 ceph-mon[75294]: 11.5 scrub starts
Jan 31 08:03:32 compute-0 ceph-mon[75294]: 11.5 scrub ok
Jan 31 08:03:32 compute-0 sudo[109519]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:32 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 08:03:32 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 08:03:32 compute-0 sudo[109723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:32 compute-0 sudo[109723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:32 compute-0 sudo[109723]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:32 compute-0 sudo[109617]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:32 compute-0 sudo[109748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:03:32 compute-0 sudo[109748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:33 compute-0 podman[109814]: 2026-01-31 08:03:33.169206143 +0000 UTC m=+0.043366681 container create ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chebyshev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:03:33 compute-0 systemd[1]: Started libpod-conmon-ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232.scope.
Jan 31 08:03:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:33 compute-0 podman[109814]: 2026-01-31 08:03:33.242822307 +0000 UTC m=+0.116982875 container init ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:03:33 compute-0 podman[109814]: 2026-01-31 08:03:33.145409942 +0000 UTC m=+0.019570500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:03:33 compute-0 podman[109814]: 2026-01-31 08:03:33.249399645 +0000 UTC m=+0.123560173 container start ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chebyshev, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 31 08:03:33 compute-0 systemd[1]: libpod-ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232.scope: Deactivated successfully.
Jan 31 08:03:33 compute-0 objective_chebyshev[109883]: 167 167
Jan 31 08:03:33 compute-0 conmon[109883]: conmon ffe62ed87036dd053856 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232.scope/container/memory.events
Jan 31 08:03:33 compute-0 podman[109814]: 2026-01-31 08:03:33.256167519 +0000 UTC m=+0.130328057 container attach ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chebyshev, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:03:33 compute-0 podman[109814]: 2026-01-31 08:03:33.256819407 +0000 UTC m=+0.130979935 container died ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f1914255ef59316453eb791dc0bb64ec106a52e37d25c125232e5bac135167f-merged.mount: Deactivated successfully.
Jan 31 08:03:33 compute-0 podman[109814]: 2026-01-31 08:03:33.312857639 +0000 UTC m=+0.187018167 container remove ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chebyshev, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:03:33 compute-0 systemd[1]: libpod-conmon-ffe62ed87036dd053856c0cb467f423b30d03bb82f018fd304ded3773be6c232.scope: Deactivated successfully.
Jan 31 08:03:33 compute-0 sudo[109972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cirzhoqbxdbtpqbisqajnhctyqnlaslx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846613.1352859-154-96618890427568/AnsiballZ_stat.py'
Jan 31 08:03:33 compute-0 sudo[109972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:33 compute-0 podman[109980]: 2026-01-31 08:03:33.42971054 +0000 UTC m=+0.039504441 container create 436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_moore, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:03:33 compute-0 systemd[1]: Started libpod-conmon-436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a.scope.
Jan 31 08:03:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f607eb4c1d991ab68c6ea8cbeda7a8bf60b07d68e2309cdab2ed858b33981e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:33 compute-0 podman[109980]: 2026-01-31 08:03:33.408196105 +0000 UTC m=+0.017990026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f607eb4c1d991ab68c6ea8cbeda7a8bf60b07d68e2309cdab2ed858b33981e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f607eb4c1d991ab68c6ea8cbeda7a8bf60b07d68e2309cdab2ed858b33981e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f607eb4c1d991ab68c6ea8cbeda7a8bf60b07d68e2309cdab2ed858b33981e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:33 compute-0 python3.9[109974]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:03:33 compute-0 podman[109980]: 2026-01-31 08:03:33.520600709 +0000 UTC m=+0.130394640 container init 436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:03:33 compute-0 podman[109980]: 2026-01-31 08:03:33.526051885 +0000 UTC m=+0.135845826 container start 436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:03:33 compute-0 podman[109980]: 2026-01-31 08:03:33.534878307 +0000 UTC m=+0.144672238 container attach 436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:03:33 compute-0 sudo[109972]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:33 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 31 08:03:33 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 31 08:03:33 compute-0 focused_moore[109997]: {
Jan 31 08:03:33 compute-0 focused_moore[109997]:     "0": [
Jan 31 08:03:33 compute-0 focused_moore[109997]:         {
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "devices": [
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "/dev/loop3"
Jan 31 08:03:33 compute-0 focused_moore[109997]:             ],
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_name": "ceph_lv0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_size": "21470642176",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "name": "ceph_lv0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "tags": {
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cluster_name": "ceph",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.crush_device_class": "",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.encrypted": "0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.objectstore": "bluestore",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osd_id": "0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.type": "block",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.vdo": "0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.with_tpm": "0"
Jan 31 08:03:33 compute-0 focused_moore[109997]:             },
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "type": "block",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "vg_name": "ceph_vg0"
Jan 31 08:03:33 compute-0 focused_moore[109997]:         }
Jan 31 08:03:33 compute-0 focused_moore[109997]:     ],
Jan 31 08:03:33 compute-0 focused_moore[109997]:     "1": [
Jan 31 08:03:33 compute-0 focused_moore[109997]:         {
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "devices": [
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "/dev/loop4"
Jan 31 08:03:33 compute-0 focused_moore[109997]:             ],
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_name": "ceph_lv1",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_size": "21470642176",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "name": "ceph_lv1",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "tags": {
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cluster_name": "ceph",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.crush_device_class": "",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.encrypted": "0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.objectstore": "bluestore",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osd_id": "1",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.type": "block",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.vdo": "0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.with_tpm": "0"
Jan 31 08:03:33 compute-0 focused_moore[109997]:             },
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "type": "block",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "vg_name": "ceph_vg1"
Jan 31 08:03:33 compute-0 focused_moore[109997]:         }
Jan 31 08:03:33 compute-0 focused_moore[109997]:     ],
Jan 31 08:03:33 compute-0 focused_moore[109997]:     "2": [
Jan 31 08:03:33 compute-0 focused_moore[109997]:         {
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "devices": [
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "/dev/loop5"
Jan 31 08:03:33 compute-0 focused_moore[109997]:             ],
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_name": "ceph_lv2",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_size": "21470642176",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "name": "ceph_lv2",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "tags": {
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.cluster_name": "ceph",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.crush_device_class": "",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.encrypted": "0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.objectstore": "bluestore",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osd_id": "2",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.type": "block",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.vdo": "0",
Jan 31 08:03:33 compute-0 focused_moore[109997]:                 "ceph.with_tpm": "0"
Jan 31 08:03:33 compute-0 focused_moore[109997]:             },
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "type": "block",
Jan 31 08:03:33 compute-0 focused_moore[109997]:             "vg_name": "ceph_vg2"
Jan 31 08:03:33 compute-0 focused_moore[109997]:         }
Jan 31 08:03:33 compute-0 focused_moore[109997]:     ]
Jan 31 08:03:33 compute-0 focused_moore[109997]: }
Jan 31 08:03:33 compute-0 systemd[1]: libpod-436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a.scope: Deactivated successfully.
Jan 31 08:03:33 compute-0 ceph-mon[75294]: pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:33 compute-0 ceph-mon[75294]: 11.15 scrub starts
Jan 31 08:03:33 compute-0 ceph-mon[75294]: 11.15 scrub ok
Jan 31 08:03:33 compute-0 podman[110084]: 2026-01-31 08:03:33.829990794 +0000 UTC m=+0.022567467 container died 436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_moore, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f607eb4c1d991ab68c6ea8cbeda7a8bf60b07d68e2309cdab2ed858b33981e-merged.mount: Deactivated successfully.
Jan 31 08:03:33 compute-0 podman[110084]: 2026-01-31 08:03:33.920870941 +0000 UTC m=+0.113447604 container remove 436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:03:33 compute-0 systemd[1]: libpod-conmon-436e412229d9a6cc544690a5a80e9fd40dd8b8fd47f4ddb86433bb150758af3a.scope: Deactivated successfully.
Jan 31 08:03:33 compute-0 sudo[109748]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:34 compute-0 sudo[110144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:34 compute-0 sudo[110193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdwdqxqvxhdwuyypkjssutkvcwunwyvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846613.6520994-162-85288681675897/AnsiballZ_slurp.py'
Jan 31 08:03:34 compute-0 sudo[110144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:34 compute-0 sudo[110193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:34 compute-0 sudo[110144]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:34 compute-0 sudo[110198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:03:34 compute-0 sudo[110198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:34 compute-0 python3.9[110197]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 08:03:34 compute-0 sudo[110193]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:34 compute-0 podman[110259]: 2026-01-31 08:03:34.311292583 +0000 UTC m=+0.032224882 container create 35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:03:34 compute-0 systemd[1]: Started libpod-conmon-35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d.scope.
Jan 31 08:03:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:34 compute-0 podman[110259]: 2026-01-31 08:03:34.390950201 +0000 UTC m=+0.111882590 container init 35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:03:34 compute-0 podman[110259]: 2026-01-31 08:03:34.297002535 +0000 UTC m=+0.017934854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:03:34 compute-0 podman[110259]: 2026-01-31 08:03:34.395183182 +0000 UTC m=+0.116115501 container start 35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:03:34 compute-0 heuristic_mahavira[110275]: 167 167
Jan 31 08:03:34 compute-0 podman[110259]: 2026-01-31 08:03:34.400438302 +0000 UTC m=+0.121370781 container attach 35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:03:34 compute-0 systemd[1]: libpod-35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d.scope: Deactivated successfully.
Jan 31 08:03:34 compute-0 podman[110259]: 2026-01-31 08:03:34.40138592 +0000 UTC m=+0.122318299 container died 35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd3ec180f6c81f92fa2810e94a02277e78308701f054ff77e565ef6d64d7c8ff-merged.mount: Deactivated successfully.
Jan 31 08:03:34 compute-0 podman[110259]: 2026-01-31 08:03:34.453012756 +0000 UTC m=+0.173945065 container remove 35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:03:34 compute-0 systemd[1]: libpod-conmon-35c96da25d06a9e8da29857bfcadbe1609e815a8bb2de34f3628eca74fc38c6d.scope: Deactivated successfully.
Jan 31 08:03:34 compute-0 podman[110299]: 2026-01-31 08:03:34.603101966 +0000 UTC m=+0.046523660 container create 42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:03:34 compute-0 systemd[1]: Started libpod-conmon-42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab.scope.
Jan 31 08:03:34 compute-0 podman[110299]: 2026-01-31 08:03:34.572189873 +0000 UTC m=+0.015611567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:03:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0115dc40a1a5d72b994993328c37426dfc40ae408245da91408611f622f3ff03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0115dc40a1a5d72b994993328c37426dfc40ae408245da91408611f622f3ff03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0115dc40a1a5d72b994993328c37426dfc40ae408245da91408611f622f3ff03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0115dc40a1a5d72b994993328c37426dfc40ae408245da91408611f622f3ff03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:34 compute-0 podman[110299]: 2026-01-31 08:03:34.693420688 +0000 UTC m=+0.136842382 container init 42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:03:34 compute-0 podman[110299]: 2026-01-31 08:03:34.698540655 +0000 UTC m=+0.141962329 container start 42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:03:34 compute-0 podman[110299]: 2026-01-31 08:03:34.702019824 +0000 UTC m=+0.145441498 container attach 42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:03:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:34 compute-0 ceph-mon[75294]: 8.5 scrub starts
Jan 31 08:03:34 compute-0 ceph-mon[75294]: 8.5 scrub ok
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.900757) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846614900852, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7314, "num_deletes": 252, "total_data_size": 9801122, "memory_usage": 10057952, "flush_reason": "Manual Compaction"}
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846614959596, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7749609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 153, "largest_seqno": 7464, "table_properties": {"data_size": 7722069, "index_size": 18273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75332, "raw_average_key_size": 23, "raw_value_size": 7658549, "raw_average_value_size": 2348, "num_data_blocks": 800, "num_entries": 3261, "num_filter_entries": 3261, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846188, "oldest_key_time": 1769846188, "file_creation_time": 1769846614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 58905 microseconds, and 11060 cpu microseconds.
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.959646) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7749609 bytes OK
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.959691) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.963358) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.963377) EVENT_LOG_v1 {"time_micros": 1769846614963371, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.963413) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9769208, prev total WAL file size 9769208, number of live WAL files 2.
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.965037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7567KB) 13(59KB) 8(1944B)]
Jan 31 08:03:34 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846614965113, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7812779, "oldest_snapshot_seqno": -1}
Jan 31 08:03:34 compute-0 sshd-session[107129]: Connection closed by 192.168.122.30 port 43230
Jan 31 08:03:34 compute-0 sshd-session[107126]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:03:34 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 08:03:34 compute-0 systemd[1]: session-36.scope: Consumed 15.843s CPU time.
Jan 31 08:03:34 compute-0 systemd-logind[810]: Session 36 logged out. Waiting for processes to exit.
Jan 31 08:03:34 compute-0 systemd-logind[810]: Removed session 36.
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3088 keys, 7764882 bytes, temperature: kUnknown
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846615017719, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7764882, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7737781, "index_size": 18287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 73776, "raw_average_key_size": 23, "raw_value_size": 7675596, "raw_average_value_size": 2485, "num_data_blocks": 803, "num_entries": 3088, "num_filter_entries": 3088, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769846614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:35.017953) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7764882 bytes
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:35.023549) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.2 rd, 147.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3379, records dropped: 291 output_compression: NoCompression
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:35.023576) EVENT_LOG_v1 {"time_micros": 1769846615023563, "job": 4, "event": "compaction_finished", "compaction_time_micros": 52715, "compaction_time_cpu_micros": 10912, "output_level": 6, "num_output_files": 1, "total_output_size": 7764882, "num_input_records": 3379, "num_output_records": 3088, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846615024283, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846615024349, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846615024397, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 08:03:35 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:03:34.964953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:35 compute-0 lvm[110395]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:03:35 compute-0 lvm[110395]: VG ceph_vg1 finished
Jan 31 08:03:35 compute-0 lvm[110392]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:03:35 compute-0 lvm[110392]: VG ceph_vg0 finished
Jan 31 08:03:35 compute-0 lvm[110397]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:03:35 compute-0 lvm[110397]: VG ceph_vg2 finished
Jan 31 08:03:35 compute-0 busy_hertz[110315]: {}
Jan 31 08:03:35 compute-0 systemd[1]: libpod-42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab.scope: Deactivated successfully.
Jan 31 08:03:35 compute-0 podman[110400]: 2026-01-31 08:03:35.475041774 +0000 UTC m=+0.021820905 container died 42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0115dc40a1a5d72b994993328c37426dfc40ae408245da91408611f622f3ff03-merged.mount: Deactivated successfully.
Jan 31 08:03:35 compute-0 podman[110400]: 2026-01-31 08:03:35.534923166 +0000 UTC m=+0.081702277 container remove 42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:03:35 compute-0 systemd[1]: libpod-conmon-42a5ff9711cbbaa86b9b4a6dd7b7f12eb73c9648004b3e3cd76e1ebc65c6b7ab.scope: Deactivated successfully.
Jan 31 08:03:35 compute-0 sudo[110198]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:03:35 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:03:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:03:35 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:03:35 compute-0 sudo[110416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:03:35 compute-0 sudo[110416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:35 compute-0 sudo[110416]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 31 08:03:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 31 08:03:35 compute-0 ceph-mon[75294]: pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:03:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:03:36 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 31 08:03:36 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 31 08:03:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:36 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 08:03:36 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 08:03:36 compute-0 ceph-mon[75294]: 11.1f scrub starts
Jan 31 08:03:36 compute-0 ceph-mon[75294]: 11.1f scrub ok
Jan 31 08:03:36 compute-0 ceph-mon[75294]: 11.7 scrub starts
Jan 31 08:03:36 compute-0 ceph-mon[75294]: 11.7 scrub ok
Jan 31 08:03:37 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 08:03:37 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 08:03:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 08:03:37 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 08:03:37 compute-0 ceph-mon[75294]: pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:37 compute-0 ceph-mon[75294]: 8.1b scrub starts
Jan 31 08:03:37 compute-0 ceph-mon[75294]: 8.1b scrub ok
Jan 31 08:03:37 compute-0 ceph-mon[75294]: 8.19 scrub starts
Jan 31 08:03:37 compute-0 ceph-mon[75294]: 8.19 scrub ok
Jan 31 08:03:38 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 08:03:38 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 08:03:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:38 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 08:03:38 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 08:03:38 compute-0 ceph-mon[75294]: 10.8 scrub starts
Jan 31 08:03:38 compute-0 ceph-mon[75294]: 10.8 scrub ok
Jan 31 08:03:38 compute-0 ceph-mon[75294]: 11.1d scrub starts
Jan 31 08:03:38 compute-0 ceph-mon[75294]: 11.1d scrub ok
Jan 31 08:03:39 compute-0 ceph-mon[75294]: pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:39 compute-0 ceph-mon[75294]: 11.18 scrub starts
Jan 31 08:03:39 compute-0 ceph-mon[75294]: 11.18 scrub ok
Jan 31 08:03:40 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 08:03:40 compute-0 sshd-session[110441]: Accepted publickey for zuul from 192.168.122.30 port 58108 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:03:40 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 08:03:40 compute-0 systemd-logind[810]: New session 37 of user zuul.
Jan 31 08:03:40 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 31 08:03:40 compute-0 sshd-session[110441]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:03:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:40 compute-0 ceph-mon[75294]: 8.1e scrub starts
Jan 31 08:03:40 compute-0 ceph-mon[75294]: 8.1e scrub ok
Jan 31 08:03:41 compute-0 python3.9[110594]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:41 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 08:03:41 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 08:03:41 compute-0 ceph-mon[75294]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:42 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 31 08:03:42 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 31 08:03:42 compute-0 python3.9[110748]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:03:42 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 31 08:03:42 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 31 08:03:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:42 compute-0 ceph-mon[75294]: 11.1e scrub starts
Jan 31 08:03:42 compute-0 ceph-mon[75294]: 11.1e scrub ok
Jan 31 08:03:42 compute-0 ceph-mon[75294]: 8.a scrub starts
Jan 31 08:03:42 compute-0 ceph-mon[75294]: 8.a scrub ok
Jan 31 08:03:43 compute-0 python3.9[110941]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:03:43 compute-0 sshd-session[110444]: Connection closed by 192.168.122.30 port 58108
Jan 31 08:03:43 compute-0 sshd-session[110441]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:03:43 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 08:03:43 compute-0 systemd[1]: session-37.scope: Consumed 1.943s CPU time.
Jan 31 08:03:43 compute-0 systemd-logind[810]: Session 37 logged out. Waiting for processes to exit.
Jan 31 08:03:43 compute-0 systemd-logind[810]: Removed session 37.
Jan 31 08:03:44 compute-0 ceph-mon[75294]: 8.b scrub starts
Jan 31 08:03:44 compute-0 ceph-mon[75294]: 8.b scrub ok
Jan 31 08:03:44 compute-0 ceph-mon[75294]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:44 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 08:03:44 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 08:03:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:45 compute-0 ceph-mon[75294]: 8.13 scrub starts
Jan 31 08:03:45 compute-0 ceph-mon[75294]: 8.13 scrub ok
Jan 31 08:03:45 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 31 08:03:45 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 31 08:03:45 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 08:03:45 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 08:03:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:46 compute-0 ceph-mon[75294]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:46 compute-0 ceph-mon[75294]: 10.13 scrub starts
Jan 31 08:03:46 compute-0 ceph-mon[75294]: 10.13 scrub ok
Jan 31 08:03:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:47 compute-0 ceph-mon[75294]: 8.10 scrub starts
Jan 31 08:03:47 compute-0 ceph-mon[75294]: 8.10 scrub ok
Jan 31 08:03:47 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 08:03:47 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 08:03:47 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 31 08:03:47 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 31 08:03:48 compute-0 ceph-mon[75294]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:49 compute-0 ceph-mon[75294]: 10.7 scrub starts
Jan 31 08:03:49 compute-0 ceph-mon[75294]: 10.7 scrub ok
Jan 31 08:03:49 compute-0 ceph-mon[75294]: 8.11 scrub starts
Jan 31 08:03:49 compute-0 ceph-mon[75294]: 8.11 scrub ok
Jan 31 08:03:49 compute-0 ceph-mon[75294]: pgmap v337: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:49 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 08:03:49 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 08:03:49 compute-0 sshd-session[110967]: Accepted publickey for zuul from 192.168.122.30 port 38382 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:03:49 compute-0 systemd-logind[810]: New session 38 of user zuul.
Jan 31 08:03:49 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 31 08:03:49 compute-0 sshd-session[110967]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:03:49 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 08:03:49 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 08:03:50 compute-0 ceph-mon[75294]: 11.10 scrub starts
Jan 31 08:03:50 compute-0 ceph-mon[75294]: 11.10 scrub ok
Jan 31 08:03:50 compute-0 python3.9[111120]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:03:50
Jan 31 08:03:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:03:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:03:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'default.rgw.control']
Jan 31 08:03:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:03:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:51 compute-0 ceph-mon[75294]: 11.1c scrub starts
Jan 31 08:03:51 compute-0 ceph-mon[75294]: 11.1c scrub ok
Jan 31 08:03:51 compute-0 ceph-mon[75294]: pgmap v338: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:51 compute-0 python3.9[111274]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:51 compute-0 sudo[111428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsnpqdrtxsblgixbtyvozzfntqcpdhxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846631.6591268-35-195498101174567/AnsiballZ_setup.py'
Jan 31 08:03:51 compute-0 sudo[111428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:52 compute-0 python3.9[111430]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:03:52 compute-0 sudo[111428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:52 compute-0 sudo[111512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czmwxbveuywutymhdthlhishihlgkokr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846631.6591268-35-195498101174567/AnsiballZ_dnf.py'
Jan 31 08:03:52 compute-0 sudo[111512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:52 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 08:03:52 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 08:03:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:52 compute-0 python3.9[111514]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:53 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 31 08:03:53 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 31 08:03:53 compute-0 ceph-mon[75294]: 11.12 scrub starts
Jan 31 08:03:53 compute-0 ceph-mon[75294]: 11.12 scrub ok
Jan 31 08:03:53 compute-0 ceph-mon[75294]: pgmap v339: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:53 compute-0 ceph-mon[75294]: 10.f scrub starts
Jan 31 08:03:53 compute-0 ceph-mon[75294]: 10.f scrub ok
Jan 31 08:03:53 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 08:03:54 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 08:03:54 compute-0 sudo[111512]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:54 compute-0 sudo[111665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woonorbkmuntwfdsmwaukwslzktlplwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846634.3612552-47-223774389923690/AnsiballZ_setup.py'
Jan 31 08:03:54 compute-0 sudo[111665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:54 compute-0 python3.9[111667]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:03:54 compute-0 ceph-mon[75294]: 10.e scrub starts
Jan 31 08:03:54 compute-0 ceph-mon[75294]: 10.e scrub ok
Jan 31 08:03:54 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 31 08:03:55 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 31 08:03:55 compute-0 sudo[111665]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:03:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:03:55 compute-0 sudo[111860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyzuhvekehxmrsqbdqiqjgxzbffkwcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846635.3746932-58-149948570299764/AnsiballZ_file.py'
Jan 31 08:03:55 compute-0 sudo[111860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:55 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 08:03:55 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 08:03:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:03:56 compute-0 python3.9[111862]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 08:03:56 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 08:03:56 compute-0 sudo[111860]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:56 compute-0 ceph-mon[75294]: pgmap v340: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:56 compute-0 ceph-mon[75294]: 8.f scrub starts
Jan 31 08:03:56 compute-0 ceph-mon[75294]: 8.f scrub ok
Jan 31 08:03:56 compute-0 sudo[112012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdwdivtfhjauqeyoenrkkjzqduojadxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846636.1946595-66-247225096078803/AnsiballZ_command.py'
Jan 31 08:03:56 compute-0 sudo[112012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:56 compute-0 python3.9[112014]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:03:56 compute-0 sudo[112012]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:57 compute-0 ceph-mon[75294]: 11.1b scrub starts
Jan 31 08:03:57 compute-0 ceph-mon[75294]: 11.1b scrub ok
Jan 31 08:03:57 compute-0 ceph-mon[75294]: 10.d scrub starts
Jan 31 08:03:57 compute-0 ceph-mon[75294]: 10.d scrub ok
Jan 31 08:03:57 compute-0 ceph-mon[75294]: pgmap v341: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:57 compute-0 sudo[112177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrnucbdwbkjkpiblsousridrwxtvevvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846636.9731348-74-152066609520900/AnsiballZ_stat.py'
Jan 31 08:03:57 compute-0 sudo[112177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:57 compute-0 python3.9[112179]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:57 compute-0 sudo[112177]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:57 compute-0 sudo[112255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkcvbdutoffoeqipnccszxztpctahbpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846636.9731348-74-152066609520900/AnsiballZ_file.py'
Jan 31 08:03:57 compute-0 sudo[112255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:57 compute-0 python3.9[112257]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:57 compute-0 sudo[112255]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:58 compute-0 sudo[112407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzaotkbmcvknckctnzlcgjcnxyiwyhcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846638.099866-86-22535233672175/AnsiballZ_stat.py'
Jan 31 08:03:58 compute-0 sudo[112407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:58 compute-0 python3.9[112409]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:58 compute-0 sudo[112407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:58 compute-0 sudo[112485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhdcvdbhdebmszeqvctfzbgequrozjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846638.099866-86-22535233672175/AnsiballZ_file.py'
Jan 31 08:03:58 compute-0 sudo[112485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:03:58 compute-0 python3.9[112487]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:58 compute-0 sudo[112485]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:59 compute-0 sudo[112637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsikiioyehyelhjwxuoxxfgczepyvqlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846639.0938628-99-240960680853533/AnsiballZ_ini_file.py'
Jan 31 08:03:59 compute-0 sudo[112637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:59 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 08:03:59 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 08:03:59 compute-0 python3.9[112639]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:59 compute-0 sudo[112637]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:59 compute-0 ceph-mon[75294]: pgmap v342: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:00 compute-0 sudo[112789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylaxghlugjqekruapuntlqcvuckncazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846639.8116922-99-115696840558528/AnsiballZ_ini_file.py'
Jan 31 08:04:00 compute-0 sudo[112789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:00 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 08:04:00 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 08:04:00 compute-0 python3.9[112791]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:04:00 compute-0 sudo[112789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:00 compute-0 sudo[112941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgudokzchdpdfwsiyxnhslojgnkoznyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846640.3873622-99-215009742258183/AnsiballZ_ini_file.py'
Jan 31 08:04:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 31 08:04:00 compute-0 sudo[112941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 31 08:04:00 compute-0 python3.9[112943]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:04:00 compute-0 sudo[112941]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:00 compute-0 ceph-mon[75294]: 11.11 scrub starts
Jan 31 08:04:00 compute-0 ceph-mon[75294]: 11.11 scrub ok
Jan 31 08:04:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 31 08:04:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 31 08:04:01 compute-0 sudo[113093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emzbrbzwjnmsoosjovakzvleoxvqxlkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846640.9663916-99-123908631424005/AnsiballZ_ini_file.py'
Jan 31 08:04:01 compute-0 sudo[113093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:01 compute-0 python3.9[113095]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:04:01 compute-0 sudo[113093]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:01 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 08:04:01 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 08:04:01 compute-0 sudo[113245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvtzsjeuwpbdvuzousqnfnmlhxweirtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846641.584479-130-160389343732399/AnsiballZ_dnf.py'
Jan 31 08:04:01 compute-0 sudo[113245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 11.6 scrub starts
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 11.6 scrub ok
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 8.1c scrub starts
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 8.1c scrub ok
Jan 31 08:04:01 compute-0 ceph-mon[75294]: pgmap v343: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 8.6 scrub starts
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 8.6 scrub ok
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 10.b scrub starts
Jan 31 08:04:01 compute-0 ceph-mon[75294]: 10.b scrub ok
Jan 31 08:04:02 compute-0 python3.9[113247]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:04:02 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 08:04:02 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 08:04:02 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 31 08:04:02 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 31 08:04:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:02 compute-0 ceph-mon[75294]: 10.15 scrub starts
Jan 31 08:04:02 compute-0 ceph-mon[75294]: 10.6 scrub starts
Jan 31 08:04:02 compute-0 ceph-mon[75294]: 10.6 scrub ok
Jan 31 08:04:03 compute-0 sudo[113245]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:03 compute-0 sudo[113398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yactvstsnflqnyxakihsdajsmjuawyhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846643.539939-141-172290934294881/AnsiballZ_setup.py'
Jan 31 08:04:03 compute-0 sudo[113398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:03 compute-0 ceph-mon[75294]: 10.15 scrub ok
Jan 31 08:04:03 compute-0 ceph-mon[75294]: pgmap v344: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:04 compute-0 python3.9[113400]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:04:04 compute-0 sudo[113398]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:04 compute-0 sudo[113552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvkkndmwgetdfaofykmmbitdobokiyyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846644.1720884-149-62144799920335/AnsiballZ_stat.py'
Jan 31 08:04:04 compute-0 sudo[113552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:04 compute-0 python3.9[113554]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:04:04 compute-0 sudo[113552]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:04 compute-0 sudo[113704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnkxhoutxtkuniigkiyrfgaemdecmzkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846644.7562644-158-183932107118314/AnsiballZ_stat.py'
Jan 31 08:04:04 compute-0 sudo[113704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:05 compute-0 python3.9[113706]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:04:05 compute-0 sudo[113704]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:05 compute-0 sudo[113856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykrswrnjjlrvfoxvsajryftlahtzwff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846645.3921564-168-221510428339841/AnsiballZ_command.py'
Jan 31 08:04:05 compute-0 sudo[113856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:05 compute-0 python3.9[113858]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:05 compute-0 sudo[113856]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:05 compute-0 ceph-mon[75294]: pgmap v345: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:04:06 compute-0 sudo[114009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkdemempkxlqxljefvpxndxajchvlfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846646.048053-178-12108703371307/AnsiballZ_service_facts.py'
Jan 31 08:04:06 compute-0 sudo[114009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:06 compute-0 python3.9[114011]: ansible-service_facts Invoked
Jan 31 08:04:06 compute-0 network[114028]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:04:06 compute-0 network[114029]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:04:06 compute-0 network[114030]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:04:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:07 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 08:04:07 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 08:04:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 08:04:07 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 08:04:08 compute-0 ceph-mon[75294]: pgmap v346: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:08 compute-0 ceph-mon[75294]: 10.1a scrub starts
Jan 31 08:04:08 compute-0 ceph-mon[75294]: 10.1a scrub ok
Jan 31 08:04:08 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 31 08:04:08 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 31 08:04:08 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 08:04:08 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 08:04:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:09 compute-0 ceph-mon[75294]: 11.d scrub starts
Jan 31 08:04:09 compute-0 ceph-mon[75294]: 11.d scrub ok
Jan 31 08:04:09 compute-0 ceph-mon[75294]: 10.2 scrub starts
Jan 31 08:04:09 compute-0 ceph-mon[75294]: 10.2 scrub ok
Jan 31 08:04:09 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 08:04:09 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 08:04:09 compute-0 sudo[114009]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-0 ceph-mon[75294]: 8.12 scrub starts
Jan 31 08:04:10 compute-0 ceph-mon[75294]: 8.12 scrub ok
Jan 31 08:04:10 compute-0 ceph-mon[75294]: pgmap v347: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 08:04:10 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 08:04:10 compute-0 sudo[114313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdecpxygcovlmcbpfccndzrrccfpgjwz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769846650.1671355-193-243470275606030/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769846650.1671355-193-243470275606030/args'
Jan 31 08:04:10 compute-0 sudo[114313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:10 compute-0 sudo[114313]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 08:04:10 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 08:04:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:11 compute-0 sudo[114480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuxqootgzrelrkdaycdsdpyrnnbornrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846650.7745593-204-68241923034597/AnsiballZ_dnf.py'
Jan 31 08:04:11 compute-0 sudo[114480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:11 compute-0 ceph-mon[75294]: 10.9 scrub starts
Jan 31 08:04:11 compute-0 ceph-mon[75294]: 10.9 scrub ok
Jan 31 08:04:11 compute-0 ceph-mon[75294]: 10.19 scrub starts
Jan 31 08:04:11 compute-0 ceph-mon[75294]: 10.19 scrub ok
Jan 31 08:04:11 compute-0 python3.9[114482]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:04:11 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 31 08:04:11 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 31 08:04:12 compute-0 ceph-mon[75294]: 9.1e scrub starts
Jan 31 08:04:12 compute-0 ceph-mon[75294]: 9.1e scrub ok
Jan 31 08:04:12 compute-0 ceph-mon[75294]: pgmap v348: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:12 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 31 08:04:12 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 31 08:04:12 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 31 08:04:12 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 31 08:04:12 compute-0 sudo[114480]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:13 compute-0 ceph-mon[75294]: 8.d scrub starts
Jan 31 08:04:13 compute-0 ceph-mon[75294]: 8.d scrub ok
Jan 31 08:04:13 compute-0 ceph-mon[75294]: 10.11 scrub starts
Jan 31 08:04:13 compute-0 ceph-mon[75294]: 10.11 scrub ok
Jan 31 08:04:13 compute-0 sudo[114633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjsyivgdpbsyrvowqyiujjqjesxiponn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846653.0207775-217-61397646013845/AnsiballZ_package_facts.py'
Jan 31 08:04:13 compute-0 sudo[114633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:13 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 31 08:04:13 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 31 08:04:13 compute-0 python3.9[114635]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 08:04:14 compute-0 ceph-mon[75294]: 9.1c scrub starts
Jan 31 08:04:14 compute-0 ceph-mon[75294]: 9.1c scrub ok
Jan 31 08:04:14 compute-0 ceph-mon[75294]: pgmap v349: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:14 compute-0 ceph-mon[75294]: 10.10 scrub starts
Jan 31 08:04:14 compute-0 ceph-mon[75294]: 10.10 scrub ok
Jan 31 08:04:14 compute-0 sudo[114633]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:14 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 08:04:14 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 08:04:14 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 31 08:04:14 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 31 08:04:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:14 compute-0 sudo[114785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmfoudmwukdozeyldojiqjrjmmrchhah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846654.5647106-227-68824711612382/AnsiballZ_stat.py'
Jan 31 08:04:14 compute-0 sudo[114785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:15 compute-0 ceph-mon[75294]: 10.12 scrub starts
Jan 31 08:04:15 compute-0 ceph-mon[75294]: 10.12 scrub ok
Jan 31 08:04:15 compute-0 ceph-mon[75294]: pgmap v350: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:15 compute-0 python3.9[114787]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:15 compute-0 sudo[114785]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:15 compute-0 sudo[114863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxsqpldaohdirffzbbyofrjmizlwrrpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846654.5647106-227-68824711612382/AnsiballZ_file.py'
Jan 31 08:04:15 compute-0 sudo[114863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:15 compute-0 python3.9[114865]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:15 compute-0 sudo[114863]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:15 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 31 08:04:15 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 31 08:04:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:16 compute-0 sudo[115015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldgdqlbsbqpiiaouiyqcgewtdrouuuqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846655.8108175-239-261202614719086/AnsiballZ_stat.py'
Jan 31 08:04:16 compute-0 sudo[115015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:16 compute-0 ceph-mon[75294]: 9.1b scrub starts
Jan 31 08:04:16 compute-0 ceph-mon[75294]: 9.1b scrub ok
Jan 31 08:04:16 compute-0 ceph-mon[75294]: 10.14 scrub starts
Jan 31 08:04:16 compute-0 ceph-mon[75294]: 10.14 scrub ok
Jan 31 08:04:16 compute-0 python3.9[115017]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:16 compute-0 sudo[115015]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:16 compute-0 sudo[115093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtlynmjsccdgmipxuepogmapfdzynmpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846655.8108175-239-261202614719086/AnsiballZ_file.py'
Jan 31 08:04:16 compute-0 sudo[115093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:16 compute-0 python3.9[115095]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:16 compute-0 sudo[115093]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:17 compute-0 ceph-mon[75294]: pgmap v351: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:17 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 08:04:17 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 08:04:17 compute-0 sudo[115245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyzxuykciwzpmzuvadhtzasqyubdomnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846657.1470585-257-280842145247186/AnsiballZ_lineinfile.py'
Jan 31 08:04:17 compute-0 sudo[115245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:17 compute-0 python3.9[115247]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:17 compute-0 sudo[115245]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:18 compute-0 ceph-mon[75294]: 11.2 scrub starts
Jan 31 08:04:18 compute-0 ceph-mon[75294]: 11.2 scrub ok
Jan 31 08:04:18 compute-0 sudo[115397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oppqouziqmazfgozsrvtkrrfxemkfxgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846658.2173088-272-68938801826507/AnsiballZ_setup.py'
Jan 31 08:04:18 compute-0 sudo[115397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:18 compute-0 python3.9[115399]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:04:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:18 compute-0 sudo[115397]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:19 compute-0 ceph-mon[75294]: pgmap v352: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:19 compute-0 sudo[115481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqggdahjfflzhmnmfxtvennruxvjepdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846658.2173088-272-68938801826507/AnsiballZ_systemd.py'
Jan 31 08:04:19 compute-0 sudo[115481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:19 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 31 08:04:19 compute-0 python3.9[115483]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:04:19 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 31 08:04:19 compute-0 sudo[115481]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:20 compute-0 ceph-mon[75294]: 9.15 scrub starts
Jan 31 08:04:20 compute-0 ceph-mon[75294]: 9.15 scrub ok
Jan 31 08:04:20 compute-0 sshd-session[110970]: Connection closed by 192.168.122.30 port 38382
Jan 31 08:04:20 compute-0 sshd-session[110967]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:04:20 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 08:04:20 compute-0 systemd[1]: session-38.scope: Consumed 20.664s CPU time.
Jan 31 08:04:20 compute-0 systemd-logind[810]: Session 38 logged out. Waiting for processes to exit.
Jan 31 08:04:20 compute-0 systemd-logind[810]: Removed session 38.
Jan 31 08:04:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:21 compute-0 ceph-mon[75294]: pgmap v353: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:21 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 31 08:04:21 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 31 08:04:22 compute-0 ceph-mon[75294]: 9.14 scrub starts
Jan 31 08:04:22 compute-0 ceph-mon[75294]: 9.14 scrub ok
Jan 31 08:04:22 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 08:04:22 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 08:04:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:23 compute-0 ceph-mon[75294]: 9.10 scrub starts
Jan 31 08:04:23 compute-0 ceph-mon[75294]: 9.10 scrub ok
Jan 31 08:04:23 compute-0 ceph-mon[75294]: pgmap v354: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:23 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 31 08:04:23 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 31 08:04:23 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 08:04:23 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 08:04:24 compute-0 ceph-mon[75294]: 11.1a scrub starts
Jan 31 08:04:24 compute-0 ceph-mon[75294]: 11.1a scrub ok
Jan 31 08:04:24 compute-0 ceph-mon[75294]: 9.2 scrub starts
Jan 31 08:04:24 compute-0 ceph-mon[75294]: 9.2 scrub ok
Jan 31 08:04:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:24 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 08:04:24 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 08:04:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:25 compute-0 ceph-mon[75294]: 9.a scrub starts
Jan 31 08:04:25 compute-0 ceph-mon[75294]: 9.a scrub ok
Jan 31 08:04:25 compute-0 ceph-mon[75294]: pgmap v355: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 31 08:04:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:25 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 31 08:04:26 compute-0 ceph-mon[75294]: 9.0 scrub starts
Jan 31 08:04:26 compute-0 ceph-mon[75294]: 9.0 scrub ok
Jan 31 08:04:26 compute-0 sshd-session[115510]: Accepted publickey for zuul from 192.168.122.30 port 43172 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:04:26 compute-0 systemd-logind[810]: New session 39 of user zuul.
Jan 31 08:04:26 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 31 08:04:26 compute-0 sshd-session[115510]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:04:26 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 08:04:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:26 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 08:04:27 compute-0 sudo[115663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcpplwrwigklcsovqkttwgqrioaydxph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846666.9391406-17-202536362452660/AnsiballZ_file.py'
Jan 31 08:04:27 compute-0 sudo[115663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:27 compute-0 ceph-mon[75294]: 9.4 scrub starts
Jan 31 08:04:27 compute-0 ceph-mon[75294]: pgmap v356: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:27 compute-0 ceph-mon[75294]: 9.4 scrub ok
Jan 31 08:04:27 compute-0 python3.9[115665]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:27 compute-0 sudo[115663]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 31 08:04:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 31 08:04:28 compute-0 sudo[115815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsnacgixxihrntgrnrjtjobhmljgfqvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846667.6924253-29-8360680701743/AnsiballZ_stat.py'
Jan 31 08:04:28 compute-0 sudo[115815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:28 compute-0 python3.9[115817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:28 compute-0 sudo[115815]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:28 compute-0 sudo[115893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayzapzfabvpdxuvqixxypmwbhhwqwmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846667.6924253-29-8360680701743/AnsiballZ_file.py'
Jan 31 08:04:28 compute-0 sudo[115893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:28 compute-0 ceph-mon[75294]: 8.2 scrub starts
Jan 31 08:04:28 compute-0 ceph-mon[75294]: 8.2 scrub ok
Jan 31 08:04:28 compute-0 python3.9[115895]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:28 compute-0 sudo[115893]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:28 compute-0 sshd-session[115513]: Connection closed by 192.168.122.30 port 43172
Jan 31 08:04:28 compute-0 sshd-session[115510]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:04:28 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 08:04:28 compute-0 systemd[1]: session-39.scope: Consumed 1.260s CPU time.
Jan 31 08:04:28 compute-0 systemd-logind[810]: Session 39 logged out. Waiting for processes to exit.
Jan 31 08:04:28 compute-0 systemd-logind[810]: Removed session 39.
Jan 31 08:04:29 compute-0 ceph-mon[75294]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:30 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 08:04:30 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 08:04:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:30 compute-0 ceph-mon[75294]: 9.1a scrub starts
Jan 31 08:04:30 compute-0 ceph-mon[75294]: 9.1a scrub ok
Jan 31 08:04:31 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 31 08:04:31 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 31 08:04:32 compute-0 ceph-mon[75294]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:33 compute-0 ceph-mon[75294]: 9.1d scrub starts
Jan 31 08:04:33 compute-0 ceph-mon[75294]: 9.1d scrub ok
Jan 31 08:04:34 compute-0 ceph-mon[75294]: pgmap v359: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:34 compute-0 sshd-session[115920]: Accepted publickey for zuul from 192.168.122.30 port 36258 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:04:34 compute-0 systemd-logind[810]: New session 40 of user zuul.
Jan 31 08:04:34 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 31 08:04:34 compute-0 sshd-session[115920]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:04:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:35 compute-0 python3.9[116073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:04:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 31 08:04:35 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 31 08:04:35 compute-0 sudo[116154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:35 compute-0 sudo[116154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:35 compute-0 sudo[116154]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:35 compute-0 sudo[116183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:04:35 compute-0 sudo[116183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:35 compute-0 sudo[116277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdckdbcqsgfljzsczngxbhsszwxauobc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846675.4657946-28-44671600175993/AnsiballZ_file.py'
Jan 31 08:04:35 compute-0 sudo[116277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:36 compute-0 python3.9[116279]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:36 compute-0 ceph-mon[75294]: pgmap v360: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:36 compute-0 sudo[116277]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:36 compute-0 sudo[116183]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:04:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:04:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:04:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:04:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:04:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:04:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:04:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:04:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:04:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:04:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:04:36 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:04:36 compute-0 sudo[116358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:36 compute-0 sudo[116358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:36 compute-0 sudo[116358]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:36 compute-0 sudo[116416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:04:36 compute-0 sudo[116416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:36 compute-0 podman[116472]: 2026-01-31 08:04:36.541594872 +0000 UTC m=+0.034093517 container create ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gauss, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:36 compute-0 systemd[1]: Started libpod-conmon-ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3.scope.
Jan 31 08:04:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:36 compute-0 podman[116472]: 2026-01-31 08:04:36.601013578 +0000 UTC m=+0.093512243 container init ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gauss, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:04:36 compute-0 podman[116472]: 2026-01-31 08:04:36.605276857 +0000 UTC m=+0.097775502 container start ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:04:36 compute-0 podman[116472]: 2026-01-31 08:04:36.60857228 +0000 UTC m=+0.101070945 container attach ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:04:36 compute-0 interesting_gauss[116508]: 167 167
Jan 31 08:04:36 compute-0 systemd[1]: libpod-ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3.scope: Deactivated successfully.
Jan 31 08:04:36 compute-0 podman[116472]: 2026-01-31 08:04:36.610488754 +0000 UTC m=+0.102987399 container died ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gauss, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:04:36 compute-0 podman[116472]: 2026-01-31 08:04:36.526023325 +0000 UTC m=+0.018521990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-479168c336ce8237e0d72b98403ce8b760dfd79117762e9571db404e13563a4b-merged.mount: Deactivated successfully.
Jan 31 08:04:36 compute-0 podman[116472]: 2026-01-31 08:04:36.655932449 +0000 UTC m=+0.148431094 container remove ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gauss, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 31 08:04:36 compute-0 systemd[1]: libpod-conmon-ccbfb4204846651ff07c030acdca37785b73a653787d4c61030fd3752b0090e3.scope: Deactivated successfully.
Jan 31 08:04:36 compute-0 sudo[116595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nspdtwkfjewxwxjpkxkipxjuiwpakwfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846676.2003105-36-246282564815931/AnsiballZ_stat.py'
Jan 31 08:04:36 compute-0 sudo[116595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:36 compute-0 podman[116577]: 2026-01-31 08:04:36.769822902 +0000 UTC m=+0.035197168 container create aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bhaskara, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:04:36 compute-0 systemd[1]: Started libpod-conmon-aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958.scope.
Jan 31 08:04:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1791b81b16377b0f5a66ed0fc35adb3c6d505d4a5792e822cd272f28dcc4c979/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1791b81b16377b0f5a66ed0fc35adb3c6d505d4a5792e822cd272f28dcc4c979/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1791b81b16377b0f5a66ed0fc35adb3c6d505d4a5792e822cd272f28dcc4c979/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1791b81b16377b0f5a66ed0fc35adb3c6d505d4a5792e822cd272f28dcc4c979/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1791b81b16377b0f5a66ed0fc35adb3c6d505d4a5792e822cd272f28dcc4c979/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:36 compute-0 podman[116577]: 2026-01-31 08:04:36.846186513 +0000 UTC m=+0.111560779 container init aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:04:36 compute-0 podman[116577]: 2026-01-31 08:04:36.753559956 +0000 UTC m=+0.018934242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:04:36 compute-0 podman[116577]: 2026-01-31 08:04:36.854124446 +0000 UTC m=+0.119498722 container start aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:04:36 compute-0 podman[116577]: 2026-01-31 08:04:36.857688606 +0000 UTC m=+0.123062882 container attach aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:04:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:36 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 08:04:36 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 08:04:36 compute-0 python3.9[116603]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:36 compute-0 sudo[116595]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:37 compute-0 ceph-mon[75294]: 8.15 scrub starts
Jan 31 08:04:37 compute-0 ceph-mon[75294]: 8.15 scrub ok
Jan 31 08:04:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:04:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:04:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:04:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:04:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:04:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:04:37 compute-0 ceph-mon[75294]: 9.12 scrub starts
Jan 31 08:04:37 compute-0 ceph-mon[75294]: 9.12 scrub ok
Jan 31 08:04:37 compute-0 sudo[116695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bryrdjodwiacoeutjcbsfmbdjdbvlzbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846676.2003105-36-246282564815931/AnsiballZ_file.py'
Jan 31 08:04:37 compute-0 sudo[116695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:37 compute-0 loving_bhaskara[116608]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:04:37 compute-0 loving_bhaskara[116608]: --> All data devices are unavailable
Jan 31 08:04:37 compute-0 systemd[1]: libpod-aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958.scope: Deactivated successfully.
Jan 31 08:04:37 compute-0 conmon[116608]: conmon aa2ee2bff71bc9978918 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958.scope/container/memory.events
Jan 31 08:04:37 compute-0 podman[116577]: 2026-01-31 08:04:37.260061579 +0000 UTC m=+0.525435845 container died aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1791b81b16377b0f5a66ed0fc35adb3c6d505d4a5792e822cd272f28dcc4c979-merged.mount: Deactivated successfully.
Jan 31 08:04:37 compute-0 podman[116577]: 2026-01-31 08:04:37.299931328 +0000 UTC m=+0.565305594 container remove aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:04:37 compute-0 systemd[1]: libpod-conmon-aa2ee2bff71bc997891832debc2a3d878d10a427c16044994763f53907044958.scope: Deactivated successfully.
Jan 31 08:04:37 compute-0 python3.9[116697]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.cyye0oig recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:37 compute-0 sudo[116416]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:37 compute-0 sudo[116695]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:37 compute-0 sudo[116719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:37 compute-0 sudo[116719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:37 compute-0 sudo[116719]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:37 compute-0 sudo[116768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:04:37 compute-0 sudo[116768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:37 compute-0 podman[116823]: 2026-01-31 08:04:37.67444563 +0000 UTC m=+0.044831588 container create 8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_noether, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:04:37 compute-0 systemd[1]: Started libpod-conmon-8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993.scope.
Jan 31 08:04:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:37 compute-0 podman[116823]: 2026-01-31 08:04:37.736683685 +0000 UTC m=+0.107069673 container init 8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:04:37 compute-0 podman[116823]: 2026-01-31 08:04:37.742259331 +0000 UTC m=+0.112645289 container start 8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:04:37 compute-0 unruffled_noether[116875]: 167 167
Jan 31 08:04:37 compute-0 systemd[1]: libpod-8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993.scope: Deactivated successfully.
Jan 31 08:04:37 compute-0 conmon[116875]: conmon 8f8ab3d45bba93cd6093 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993.scope/container/memory.events
Jan 31 08:04:37 compute-0 podman[116823]: 2026-01-31 08:04:37.747006925 +0000 UTC m=+0.117392883 container attach 8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:04:37 compute-0 podman[116823]: 2026-01-31 08:04:37.747320383 +0000 UTC m=+0.117706341 container died 8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:04:37 compute-0 podman[116823]: 2026-01-31 08:04:37.656450545 +0000 UTC m=+0.026836533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e68ace6e90b6a4b2fba6c189694849046c4037d631bdb4b11903c847ca43a0d4-merged.mount: Deactivated successfully.
Jan 31 08:04:37 compute-0 podman[116823]: 2026-01-31 08:04:37.784325511 +0000 UTC m=+0.154711469 container remove 8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_noether, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:37 compute-0 systemd[1]: libpod-conmon-8f8ab3d45bba93cd6093089b077d45fee8cd51c41d76ffc911f2a49d98c82993.scope: Deactivated successfully.
Jan 31 08:04:37 compute-0 sudo[116970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwvdgzdlkytohajmsvojbjbhbiujkhrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846677.6266994-56-192014643407902/AnsiballZ_stat.py'
Jan 31 08:04:37 compute-0 sudo[116970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:37 compute-0 podman[116974]: 2026-01-31 08:04:37.893711689 +0000 UTC m=+0.033928073 container create 32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dewdney, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:04:37 compute-0 systemd[1]: Started libpod-conmon-32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f.scope.
Jan 31 08:04:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb4c46fad95dc5784deb89d8282027c6640db8ba2b515413d67a7b012b35936/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb4c46fad95dc5784deb89d8282027c6640db8ba2b515413d67a7b012b35936/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb4c46fad95dc5784deb89d8282027c6640db8ba2b515413d67a7b012b35936/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb4c46fad95dc5784deb89d8282027c6640db8ba2b515413d67a7b012b35936/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:37 compute-0 podman[116974]: 2026-01-31 08:04:37.879058737 +0000 UTC m=+0.019275141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:04:37 compute-0 podman[116974]: 2026-01-31 08:04:37.981183311 +0000 UTC m=+0.121399715 container init 32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:04:37 compute-0 podman[116974]: 2026-01-31 08:04:37.990348869 +0000 UTC m=+0.130565253 container start 32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dewdney, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:04:37 compute-0 podman[116974]: 2026-01-31 08:04:37.994322119 +0000 UTC m=+0.134538503 container attach 32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:04:38 compute-0 python3.9[116982]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:38 compute-0 ceph-mon[75294]: pgmap v361: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:38 compute-0 sudo[116970]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:38 compute-0 sudo[117076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slepfpqlxhtywouwfxhqnifdmlzmysht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846677.6266994-56-192014643407902/AnsiballZ_file.py'
Jan 31 08:04:38 compute-0 sudo[117076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:38 compute-0 silly_dewdney[116992]: {
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:     "0": [
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:         {
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "devices": [
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "/dev/loop3"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             ],
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_name": "ceph_lv0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_size": "21470642176",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "name": "ceph_lv0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "tags": {
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cluster_name": "ceph",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.crush_device_class": "",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.encrypted": "0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.objectstore": "bluestore",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osd_id": "0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.type": "block",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.vdo": "0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.with_tpm": "0"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             },
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "type": "block",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "vg_name": "ceph_vg0"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:         }
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:     ],
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:     "1": [
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:         {
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "devices": [
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "/dev/loop4"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             ],
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_name": "ceph_lv1",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_size": "21470642176",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "name": "ceph_lv1",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "tags": {
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cluster_name": "ceph",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.crush_device_class": "",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.encrypted": "0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.objectstore": "bluestore",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osd_id": "1",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.type": "block",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.vdo": "0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.with_tpm": "0"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             },
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "type": "block",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "vg_name": "ceph_vg1"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:         }
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:     ],
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:     "2": [
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:         {
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "devices": [
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "/dev/loop5"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             ],
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_name": "ceph_lv2",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_size": "21470642176",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "name": "ceph_lv2",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "tags": {
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.cluster_name": "ceph",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.crush_device_class": "",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.encrypted": "0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.objectstore": "bluestore",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osd_id": "2",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.type": "block",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.vdo": "0",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:                 "ceph.with_tpm": "0"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             },
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "type": "block",
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:             "vg_name": "ceph_vg2"
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:         }
Jan 31 08:04:38 compute-0 silly_dewdney[116992]:     ]
Jan 31 08:04:38 compute-0 silly_dewdney[116992]: }
Jan 31 08:04:38 compute-0 systemd[1]: libpod-32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f.scope: Deactivated successfully.
Jan 31 08:04:38 compute-0 podman[116974]: 2026-01-31 08:04:38.271634977 +0000 UTC m=+0.411851431 container died 32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb4c46fad95dc5784deb89d8282027c6640db8ba2b515413d67a7b012b35936-merged.mount: Deactivated successfully.
Jan 31 08:04:38 compute-0 podman[116974]: 2026-01-31 08:04:38.318062549 +0000 UTC m=+0.458278933 container remove 32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:04:38 compute-0 systemd[1]: libpod-conmon-32fe199698d864288cfacca18e7da6bfea6a8258b632064e85b0de842116731f.scope: Deactivated successfully.
Jan 31 08:04:38 compute-0 sudo[116768]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:38 compute-0 sudo[117092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:38 compute-0 sudo[117092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:38 compute-0 sudo[117092]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:38 compute-0 sudo[117117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:04:38 compute-0 sudo[117117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:38 compute-0 python3.9[117078]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.mp4x3fzw recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:38 compute-0 sudo[117076]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:38 compute-0 podman[117200]: 2026-01-31 08:04:38.699047542 +0000 UTC m=+0.039311473 container create 052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hamilton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:04:38 compute-0 systemd[1]: Started libpod-conmon-052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be.scope.
Jan 31 08:04:38 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 08:04:38 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 08:04:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:38 compute-0 podman[117200]: 2026-01-31 08:04:38.777072331 +0000 UTC m=+0.117336282 container init 052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:04:38 compute-0 podman[117200]: 2026-01-31 08:04:38.680598415 +0000 UTC m=+0.020862356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:04:38 compute-0 podman[117200]: 2026-01-31 08:04:38.783767358 +0000 UTC m=+0.124031299 container start 052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:04:38 compute-0 podman[117200]: 2026-01-31 08:04:38.786794142 +0000 UTC m=+0.127058103 container attach 052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:04:38 compute-0 hardcore_hamilton[117263]: 167 167
Jan 31 08:04:38 compute-0 systemd[1]: libpod-052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be.scope: Deactivated successfully.
Jan 31 08:04:38 compute-0 podman[117200]: 2026-01-31 08:04:38.789056536 +0000 UTC m=+0.129320467 container died 052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6696aff4cddadfe14dc6da1905d003b4f89aa326bddf3671212a649eba509ed-merged.mount: Deactivated successfully.
Jan 31 08:04:38 compute-0 podman[117200]: 2026-01-31 08:04:38.824940113 +0000 UTC m=+0.165204054 container remove 052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:38 compute-0 systemd[1]: libpod-conmon-052d82af70ad1197beb51f03840b35ce342eb0756aa9a512a1c32948f9f000be.scope: Deactivated successfully.
Jan 31 08:04:38 compute-0 sudo[117337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylalssbzbjyhczoinxzciefrgqhkpwia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846678.6391037-69-208504697702097/AnsiballZ_file.py'
Jan 31 08:04:38 compute-0 sudo[117337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:38 compute-0 podman[117345]: 2026-01-31 08:04:38.954769323 +0000 UTC m=+0.041290959 container create 2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:38 compute-0 systemd[1]: Started libpod-conmon-2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a.scope.
Jan 31 08:04:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4197b057b436e3a4ec1d66624faa40566a32ebbaf0975e118e33e8224fe9f63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4197b057b436e3a4ec1d66624faa40566a32ebbaf0975e118e33e8224fe9f63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4197b057b436e3a4ec1d66624faa40566a32ebbaf0975e118e33e8224fe9f63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4197b057b436e3a4ec1d66624faa40566a32ebbaf0975e118e33e8224fe9f63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:39 compute-0 podman[117345]: 2026-01-31 08:04:38.937478508 +0000 UTC m=+0.024000154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:04:39 compute-0 podman[117345]: 2026-01-31 08:04:39.040679233 +0000 UTC m=+0.127200889 container init 2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:04:39 compute-0 podman[117345]: 2026-01-31 08:04:39.047507054 +0000 UTC m=+0.134028690 container start 2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:04:39 compute-0 podman[117345]: 2026-01-31 08:04:39.051918468 +0000 UTC m=+0.138440124 container attach 2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:04:39 compute-0 python3.9[117339]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:04:39 compute-0 ceph-mon[75294]: pgmap v362: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:39 compute-0 sudo[117337]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:39 compute-0 sudo[117572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcwmiwhrmfajrznpficwqfebynavzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846679.2211769-77-260168508678840/AnsiballZ_stat.py'
Jan 31 08:04:39 compute-0 sudo[117572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:39 compute-0 lvm[117591]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:04:39 compute-0 lvm[117591]: VG ceph_vg0 finished
Jan 31 08:04:39 compute-0 lvm[117592]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:04:39 compute-0 lvm[117592]: VG ceph_vg1 finished
Jan 31 08:04:39 compute-0 lvm[117594]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:04:39 compute-0 lvm[117594]: VG ceph_vg2 finished
Jan 31 08:04:39 compute-0 python3.9[117576]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:39 compute-0 sudo[117572]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:39 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 31 08:04:39 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 31 08:04:39 compute-0 fervent_brahmagupta[117361]: {}
Jan 31 08:04:39 compute-0 systemd[1]: libpod-2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a.scope: Deactivated successfully.
Jan 31 08:04:39 compute-0 systemd[1]: libpod-2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a.scope: Consumed 1.045s CPU time.
Jan 31 08:04:39 compute-0 podman[117345]: 2026-01-31 08:04:39.775863129 +0000 UTC m=+0.862384765 container died 2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4197b057b436e3a4ec1d66624faa40566a32ebbaf0975e118e33e8224fe9f63-merged.mount: Deactivated successfully.
Jan 31 08:04:39 compute-0 podman[117345]: 2026-01-31 08:04:39.810461259 +0000 UTC m=+0.896982895 container remove 2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:04:39 compute-0 systemd[1]: libpod-conmon-2b7f1dbfa54c9682be638522e34a2db043bba5f3850d01cc7fce180d657aaa8a.scope: Deactivated successfully.
Jan 31 08:04:39 compute-0 sudo[117117]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:04:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:04:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:04:39 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:04:39 compute-0 sudo[117682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbkeaabgcdytgmujnbeeharhpspvnbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846679.2211769-77-260168508678840/AnsiballZ_file.py'
Jan 31 08:04:39 compute-0 sudo[117682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:39 compute-0 sudo[117684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:04:39 compute-0 sudo[117684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:39 compute-0 sudo[117684]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:40 compute-0 ceph-mon[75294]: 11.b scrub starts
Jan 31 08:04:40 compute-0 ceph-mon[75294]: 11.b scrub ok
Jan 31 08:04:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:04:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:04:40 compute-0 python3.9[117685]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:04:40 compute-0 sudo[117682]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:40 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 31 08:04:40 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 31 08:04:40 compute-0 sudo[117859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fniyqwovwkgvnsfcncnimtnomxkhkbqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846680.204135-77-146072031528327/AnsiballZ_stat.py'
Jan 31 08:04:40 compute-0 sudo[117859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:40 compute-0 python3.9[117861]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:40 compute-0 sudo[117859]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:40 compute-0 sudo[117937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdprkfcqpgnektbtshtldrmmoqxwulox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846680.204135-77-146072031528327/AnsiballZ_file.py'
Jan 31 08:04:40 compute-0 sudo[117937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:41 compute-0 ceph-mon[75294]: 11.3 scrub starts
Jan 31 08:04:41 compute-0 ceph-mon[75294]: 11.3 scrub ok
Jan 31 08:04:41 compute-0 ceph-mon[75294]: pgmap v363: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:41 compute-0 python3.9[117939]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:04:41 compute-0 sudo[117937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:41 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 31 08:04:41 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 31 08:04:41 compute-0 sudo[118089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udvihcnnbjkjztwbqzsupldzggkjazke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846681.2758133-100-248826648943279/AnsiballZ_file.py'
Jan 31 08:04:41 compute-0 sudo[118089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:41 compute-0 python3.9[118091]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:41 compute-0 sudo[118089]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:42 compute-0 ceph-mon[75294]: 9.3 scrub starts
Jan 31 08:04:42 compute-0 ceph-mon[75294]: 9.3 scrub ok
Jan 31 08:04:42 compute-0 sudo[118241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsxihngvrffbtolppsmikqejxjbibnlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846681.9052129-108-92151451341913/AnsiballZ_stat.py'
Jan 31 08:04:42 compute-0 sudo[118241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:42 compute-0 python3.9[118243]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:42 compute-0 sudo[118241]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:42 compute-0 sudo[118319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zktdtltjiqrjzuppkmqqnlounzvlzvhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846681.9052129-108-92151451341913/AnsiballZ_file.py'
Jan 31 08:04:42 compute-0 sudo[118319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:42 compute-0 python3.9[118321]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:42 compute-0 sudo[118319]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:42 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 08:04:42 compute-0 ceph-osd[86929]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 08:04:43 compute-0 ceph-mon[75294]: 9.1 scrub starts
Jan 31 08:04:43 compute-0 ceph-mon[75294]: 9.1 scrub ok
Jan 31 08:04:43 compute-0 ceph-mon[75294]: pgmap v364: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:43 compute-0 ceph-mon[75294]: 9.1f scrub starts
Jan 31 08:04:43 compute-0 sudo[118471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkwftvatwpkdnqmvildqvbgqufypggnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846682.8740413-120-213572208207028/AnsiballZ_stat.py'
Jan 31 08:04:43 compute-0 sudo[118471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:43 compute-0 python3.9[118473]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:43 compute-0 sudo[118471]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:43 compute-0 sudo[118549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnqqmmjnnusijxbvwxxdppdcwltmyfmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846682.8740413-120-213572208207028/AnsiballZ_file.py'
Jan 31 08:04:43 compute-0 sudo[118549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:43 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 31 08:04:43 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 31 08:04:43 compute-0 python3.9[118551]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:43 compute-0 sudo[118549]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:44 compute-0 ceph-mon[75294]: 9.1f scrub ok
Jan 31 08:04:44 compute-0 sudo[118701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svfobsytdiiyojbzjilclokqdiqxrlof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846683.9109674-132-243654625903765/AnsiballZ_systemd.py'
Jan 31 08:04:44 compute-0 sudo[118701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:44 compute-0 python3.9[118703]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:04:44 compute-0 systemd[1]: Reloading.
Jan 31 08:04:44 compute-0 systemd-rc-local-generator[118727]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:04:44 compute-0 systemd-sysv-generator[118732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:04:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:45 compute-0 sudo[118701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:45 compute-0 ceph-mon[75294]: 11.8 scrub starts
Jan 31 08:04:45 compute-0 ceph-mon[75294]: 11.8 scrub ok
Jan 31 08:04:45 compute-0 ceph-mon[75294]: pgmap v365: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:45 compute-0 sudo[118890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdswkxghqieinyhkeuxlrztgtwmuopis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846685.148327-140-236029439054107/AnsiballZ_stat.py'
Jan 31 08:04:45 compute-0 sudo[118890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:45 compute-0 python3.9[118892]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:45 compute-0 sudo[118890]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:45 compute-0 sudo[118968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsubemeybstidbnjkhmbgfjmwhascsnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846685.148327-140-236029439054107/AnsiballZ_file.py'
Jan 31 08:04:45 compute-0 sudo[118968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:45 compute-0 python3.9[118970]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:45 compute-0 sudo[118968]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:46 compute-0 sudo[119120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agqvfnwmzymfshsjbgtvcwotuzsjeuli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846686.0930443-152-199165977547050/AnsiballZ_stat.py'
Jan 31 08:04:46 compute-0 sudo[119120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:46 compute-0 python3.9[119122]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:46 compute-0 sudo[119120]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:46 compute-0 sudo[119198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfaeqjjspdwogravwbrwxyyvbbpktvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846686.0930443-152-199165977547050/AnsiballZ_file.py'
Jan 31 08:04:46 compute-0 sudo[119198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:46 compute-0 python3.9[119200]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:46 compute-0 sudo[119198]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:47 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 31 08:04:47 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 31 08:04:47 compute-0 sudo[119350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svuwgzwroyitbcengqpukhiydqshfbpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846686.9844947-164-113262797172226/AnsiballZ_systemd.py'
Jan 31 08:04:47 compute-0 sudo[119350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:47 compute-0 python3.9[119352]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:04:47 compute-0 systemd[1]: Reloading.
Jan 31 08:04:47 compute-0 systemd-sysv-generator[119383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:04:47 compute-0 systemd-rc-local-generator[119379]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:04:47 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 08:04:47 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 08:04:47 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 08:04:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 08:04:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 08:04:47 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 08:04:47 compute-0 sudo[119350]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:47 compute-0 ceph-mon[75294]: pgmap v366: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:48 compute-0 python3.9[119544]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:04:48 compute-0 network[119561]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:04:48 compute-0 network[119562]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:04:48 compute-0 network[119563]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:04:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:48 compute-0 ceph-mon[75294]: 9.d scrub starts
Jan 31 08:04:48 compute-0 ceph-mon[75294]: 9.d scrub ok
Jan 31 08:04:48 compute-0 ceph-mon[75294]: 8.4 scrub starts
Jan 31 08:04:48 compute-0 ceph-mon[75294]: 8.4 scrub ok
Jan 31 08:04:49 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 08:04:49 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 08:04:49 compute-0 ceph-mon[75294]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:04:50
Jan 31 08:04:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:04:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:04:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'backups']
Jan 31 08:04:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:04:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:50 compute-0 ceph-mon[75294]: 11.9 scrub starts
Jan 31 08:04:50 compute-0 ceph-mon[75294]: 11.9 scrub ok
Jan 31 08:04:51 compute-0 ceph-mon[75294]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:52 compute-0 sudo[119823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuqmbrgbahwystxjsttvksqiejyrozno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846692.0375373-190-272644421296656/AnsiballZ_stat.py'
Jan 31 08:04:52 compute-0 sudo[119823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:52 compute-0 python3.9[119825]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:52 compute-0 sudo[119823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:52 compute-0 sudo[119901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loyxllwozjlnsaatlairxkgapiospnky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846692.0375373-190-272644421296656/AnsiballZ_file.py'
Jan 31 08:04:52 compute-0 sudo[119901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:52 compute-0 python3.9[119903]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:52 compute-0 sudo[119901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:53 compute-0 sudo[120053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gixuaswwjxefkysxoyazxotbtzlgppea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846692.995945-203-45278807798937/AnsiballZ_file.py'
Jan 31 08:04:53 compute-0 sudo[120053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:53 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 31 08:04:53 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 31 08:04:53 compute-0 python3.9[120055]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:53 compute-0 sudo[120053]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:53 compute-0 sudo[120205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lygifbirnaplteimloyojjayyhuniwot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846693.5676022-211-175208719192065/AnsiballZ_stat.py'
Jan 31 08:04:53 compute-0 sudo[120205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:53 compute-0 python3.9[120207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:53 compute-0 sudo[120205]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:53 compute-0 ceph-mon[75294]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:54 compute-0 sudo[120283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trwrcpwhhyrshcpppgcpfukchzblkcud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846693.5676022-211-175208719192065/AnsiballZ_file.py'
Jan 31 08:04:54 compute-0 sudo[120283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:54 compute-0 python3.9[120285]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:54 compute-0 sudo[120283]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:54 compute-0 sudo[120435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piliksxtvcpjrwvnyyiyrrmdszyjgtds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846694.5363173-226-224891355014884/AnsiballZ_timezone.py'
Jan 31 08:04:54 compute-0 sudo[120435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:54 compute-0 ceph-mon[75294]: 9.9 scrub starts
Jan 31 08:04:54 compute-0 ceph-mon[75294]: 9.9 scrub ok
Jan 31 08:04:55 compute-0 python3.9[120437]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 08:04:55 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 08:04:55 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 08:04:55 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 31 08:04:55 compute-0 sudo[120435]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:55 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:04:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:04:55 compute-0 sudo[120591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htutghuhxxckjcgpjckjzuxvxeecvwjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846695.4061866-235-56249732395767/AnsiballZ_file.py'
Jan 31 08:04:55 compute-0 sudo[120591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:55 compute-0 python3.9[120593]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:55 compute-0 sudo[120591]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:04:55 compute-0 ceph-mon[75294]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:56 compute-0 sudo[120743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loepnrwqgmxquitlulypqbxxhmkdyryx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846695.9443853-243-215967725871869/AnsiballZ_stat.py'
Jan 31 08:04:56 compute-0 sudo[120743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:56 compute-0 python3.9[120745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:56 compute-0 sudo[120743]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:56 compute-0 sudo[120821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcprszbotoptskgfqgmycsfsoxxeikzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846695.9443853-243-215967725871869/AnsiballZ_file.py'
Jan 31 08:04:56 compute-0 sudo[120821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:56 compute-0 python3.9[120823]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:56 compute-0 sudo[120821]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:57 compute-0 ceph-mon[75294]: 9.16 scrub starts
Jan 31 08:04:57 compute-0 ceph-mon[75294]: 9.16 scrub ok
Jan 31 08:04:57 compute-0 sudo[120973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxdyzgrtbajgzplhgyohiiuialkyxxko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846696.9247818-255-97502281780117/AnsiballZ_stat.py'
Jan 31 08:04:57 compute-0 sudo[120973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:57 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 31 08:04:57 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 31 08:04:57 compute-0 python3.9[120975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:57 compute-0 sudo[120973]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:57 compute-0 sudo[121051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrieglralmjxmytnagvjtylbxurxazih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846696.9247818-255-97502281780117/AnsiballZ_file.py'
Jan 31 08:04:57 compute-0 sudo[121051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:57 compute-0 python3.9[121053]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ohy3ghn8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:57 compute-0 sudo[121051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:58 compute-0 ceph-mon[75294]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:58 compute-0 sudo[121203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgxgcsdehykzepuvxwfepobgeufartrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846697.9076195-267-154599697979087/AnsiballZ_stat.py'
Jan 31 08:04:58 compute-0 sudo[121203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:58 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 31 08:04:58 compute-0 python3.9[121205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:58 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 31 08:04:58 compute-0 sudo[121203]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:58 compute-0 sudo[121281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eesmrxfrfxqkuvdqaysyrfficqllijwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846697.9076195-267-154599697979087/AnsiballZ_file.py'
Jan 31 08:04:58 compute-0 sudo[121281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:58 compute-0 python3.9[121283]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:58 compute-0 sudo[121281]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:04:59 compute-0 ceph-mon[75294]: 9.b scrub starts
Jan 31 08:04:59 compute-0 ceph-mon[75294]: 9.b scrub ok
Jan 31 08:04:59 compute-0 sudo[121433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxownglrvqtnqucxvrphjhqbrhrybxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846698.9404325-280-191426019081762/AnsiballZ_command.py'
Jan 31 08:04:59 compute-0 sudo[121433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:59 compute-0 python3.9[121435]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:59 compute-0 sudo[121433]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:00 compute-0 sudo[121586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdsgyirfbcrwhkbzhldqcxycsfiesvjf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846699.660203-288-42400880875139/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 08:05:00 compute-0 sudo[121586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:00 compute-0 ceph-mon[75294]: 9.5 scrub starts
Jan 31 08:05:00 compute-0 ceph-mon[75294]: 9.5 scrub ok
Jan 31 08:05:00 compute-0 ceph-mon[75294]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:00 compute-0 python3[121588]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 08:05:00 compute-0 sudo[121586]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:00 compute-0 sudo[121738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zomygxrpwtwpqmdwjqlmddkvahkcecht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846700.3888602-296-53168513744884/AnsiballZ_stat.py'
Jan 31 08:05:00 compute-0 sudo[121738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 31 08:05:00 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 31 08:05:00 compute-0 python3.9[121740]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:00 compute-0 sudo[121738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:01 compute-0 sudo[121816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltmsbvrgucismolhujpobrhwrctktyod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846700.3888602-296-53168513744884/AnsiballZ_file.py'
Jan 31 08:05:01 compute-0 sudo[121816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:01 compute-0 ceph-mon[75294]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:01 compute-0 python3.9[121818]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:01 compute-0 sudo[121816]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 31 08:05:01 compute-0 ceph-osd[85864]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 31 08:05:01 compute-0 sudo[121968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgepsdtpekgbzrseqifpkwsauvdlsfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846701.4562986-308-268826117735530/AnsiballZ_stat.py'
Jan 31 08:05:01 compute-0 sudo[121968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:02 compute-0 python3.9[121970]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:02 compute-0 sudo[121968]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:02 compute-0 ceph-mon[75294]: 9.8 scrub starts
Jan 31 08:05:02 compute-0 ceph-mon[75294]: 9.8 scrub ok
Jan 31 08:05:02 compute-0 ceph-mon[75294]: 9.11 scrub starts
Jan 31 08:05:02 compute-0 ceph-mon[75294]: 9.11 scrub ok
Jan 31 08:05:02 compute-0 sudo[122093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clybpiqypiqndkeshzlqrcwqjudtgzxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846701.4562986-308-268826117735530/AnsiballZ_copy.py'
Jan 31 08:05:02 compute-0 sudo[122093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:02 compute-0 python3.9[122095]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846701.4562986-308-268826117735530/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:02 compute-0 sudo[122093]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:03 compute-0 sudo[122245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgerxbbcgvvlyrwrgamfhnqdmfupjiug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846702.8036885-323-4567819320800/AnsiballZ_stat.py'
Jan 31 08:05:03 compute-0 sudo[122245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:03 compute-0 python3.9[122247]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:03 compute-0 sudo[122245]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:03 compute-0 ceph-mon[75294]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:03 compute-0 sudo[122323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szmdlvbnnikpityjzzzqsuuqzwowjgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846702.8036885-323-4567819320800/AnsiballZ_file.py'
Jan 31 08:05:03 compute-0 sudo[122323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:03 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 31 08:05:03 compute-0 python3.9[122325]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:03 compute-0 sudo[122323]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:03 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 31 08:05:04 compute-0 sudo[122475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydqsidqldlarynuowpenyfyplmnnjyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846703.8198578-335-129979338426903/AnsiballZ_stat.py'
Jan 31 08:05:04 compute-0 sudo[122475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:04 compute-0 python3.9[122477]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:04 compute-0 sudo[122475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:04 compute-0 ceph-mon[75294]: 9.18 scrub starts
Jan 31 08:05:04 compute-0 ceph-mon[75294]: 9.18 scrub ok
Jan 31 08:05:04 compute-0 sudo[122553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqlburwngnibwidoqxhkxrkcpmzchyhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846703.8198578-335-129979338426903/AnsiballZ_file.py'
Jan 31 08:05:04 compute-0 sudo[122553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:04 compute-0 python3.9[122555]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:04 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 08:05:04 compute-0 sudo[122553]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:04 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 08:05:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:05 compute-0 sudo[122705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsoqbeieqosascxubuxuspfqkewfdncs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846704.786243-347-129497137559262/AnsiballZ_stat.py'
Jan 31 08:05:05 compute-0 sudo[122705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:05 compute-0 python3.9[122707]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:05 compute-0 sudo[122705]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:05 compute-0 ceph-mon[75294]: 9.e scrub starts
Jan 31 08:05:05 compute-0 ceph-mon[75294]: 9.e scrub ok
Jan 31 08:05:05 compute-0 ceph-mon[75294]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:05 compute-0 sudo[122783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xskwrljcvuseqmlurzajialzjluoavry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846704.786243-347-129497137559262/AnsiballZ_file.py'
Jan 31 08:05:05 compute-0 sudo[122783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:05 compute-0 python3.9[122785]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:05 compute-0 sudo[122783]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:06 compute-0 sudo[122935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajtoiguenycyrhjckjysnalcccqwykqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846705.8798027-360-27954913459475/AnsiballZ_command.py'
Jan 31 08:05:06 compute-0 sudo[122935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:05:06 compute-0 python3.9[122937]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:06 compute-0 sudo[122935]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:06 compute-0 sudo[123090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfthfgxpbfqpghphefapcnzigbzrdgrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846706.5093617-368-41087772973257/AnsiballZ_blockinfile.py'
Jan 31 08:05:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:06 compute-0 sudo[123090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:07 compute-0 python3.9[123092]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:07 compute-0 sudo[123090]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:07 compute-0 sudo[123242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhuvotuyndzvumqnsftqrustrtfflvmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846707.261373-377-53452805321162/AnsiballZ_file.py'
Jan 31 08:05:07 compute-0 sudo[123242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:07 compute-0 python3.9[123244]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:07 compute-0 sudo[123242]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:08 compute-0 ceph-mon[75294]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:08 compute-0 sudo[123394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-felimrdxfijxgclaezuzzwbtsjfvbwlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846707.792761-377-40256969865537/AnsiballZ_file.py'
Jan 31 08:05:08 compute-0 sudo[123394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:08 compute-0 python3.9[123396]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:08 compute-0 sudo[123394]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:08 compute-0 sudo[123546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxqqqfohghegzqmkcqtcaemwuqcnilnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846708.394575-392-102217174453668/AnsiballZ_mount.py'
Jan 31 08:05:08 compute-0 sudo[123546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:09 compute-0 python3.9[123548]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 08:05:09 compute-0 sudo[123546]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:09 compute-0 sudo[123698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutvhotfxeqcuomkrdgdtarxlvwovpbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846709.1619177-392-161624569322928/AnsiballZ_mount.py'
Jan 31 08:05:09 compute-0 sudo[123698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:09 compute-0 python3.9[123700]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 08:05:09 compute-0 sudo[123698]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:09 compute-0 sshd-session[115923]: Connection closed by 192.168.122.30 port 36258
Jan 31 08:05:09 compute-0 sshd-session[115920]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:05:09 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 08:05:09 compute-0 systemd[1]: session-40.scope: Consumed 24.410s CPU time.
Jan 31 08:05:09 compute-0 systemd-logind[810]: Session 40 logged out. Waiting for processes to exit.
Jan 31 08:05:09 compute-0 systemd-logind[810]: Removed session 40.
Jan 31 08:05:10 compute-0 ceph-mon[75294]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:12 compute-0 ceph-mon[75294]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:12 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 31 08:05:12 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 31 08:05:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:13 compute-0 ceph-mon[75294]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:14 compute-0 ceph-mon[75294]: 9.13 scrub starts
Jan 31 08:05:14 compute-0 ceph-mon[75294]: 9.13 scrub ok
Jan 31 08:05:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:15 compute-0 ceph-mon[75294]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:15 compute-0 sshd-session[123726]: Accepted publickey for zuul from 192.168.122.30 port 36880 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:05:15 compute-0 systemd-logind[810]: New session 41 of user zuul.
Jan 31 08:05:15 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 31 08:05:15 compute-0 sshd-session[123726]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:05:15 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 08:05:15 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 08:05:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:16 compute-0 sudo[123879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxkayjrihwqlikkptlkxejuekhqxzrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846715.668162-16-206453182378204/AnsiballZ_tempfile.py'
Jan 31 08:05:16 compute-0 sudo[123879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:16 compute-0 python3.9[123881]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 08:05:16 compute-0 sudo[123879]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:16 compute-0 ceph-mon[75294]: 9.19 scrub starts
Jan 31 08:05:16 compute-0 ceph-mon[75294]: 9.19 scrub ok
Jan 31 08:05:16 compute-0 sudo[124031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvruravefjykcfjvwwlzywfmgbuplrgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846716.3832846-28-62567091069746/AnsiballZ_stat.py'
Jan 31 08:05:16 compute-0 sudo[124031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:16 compute-0 python3.9[124033]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:05:16 compute-0 sudo[124031]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:17 compute-0 ceph-mon[75294]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:17 compute-0 sudo[124185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfjjtchfxnmlddywsosxnqdareczrwsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846717.0988946-36-187015276886501/AnsiballZ_slurp.py'
Jan 31 08:05:17 compute-0 sudo[124185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:17 compute-0 python3.9[124187]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 08:05:17 compute-0 sudo[124185]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:17 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 08:05:17 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 08:05:18 compute-0 sudo[124337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skltkljjkecbjixhdbglvvmzyiyjnckj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846717.7940881-44-67318557856756/AnsiballZ_stat.py'
Jan 31 08:05:18 compute-0 sudo[124337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:18 compute-0 python3.9[124339]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.y6p962rb follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:18 compute-0 sudo[124337]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:18 compute-0 ceph-mon[75294]: 9.6 scrub starts
Jan 31 08:05:18 compute-0 ceph-mon[75294]: 9.6 scrub ok
Jan 31 08:05:18 compute-0 sudo[124462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leeqrgxoazbzpsfhhplvvdbfcskqascy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846717.7940881-44-67318557856756/AnsiballZ_copy.py'
Jan 31 08:05:18 compute-0 sudo[124462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:18 compute-0 python3.9[124464]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.y6p962rb mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846717.7940881-44-67318557856756/.source.y6p962rb _original_basename=.7sp14uaf follow=False checksum=7940c9350162a13c4a0938802b68b3fe05f21dda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:18 compute-0 sudo[124462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:19 compute-0 ceph-mon[75294]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:19 compute-0 sudo[124614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujqbqdssdifhvrrduhwugghbctsjyhen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846718.995681-59-272877137708756/AnsiballZ_setup.py'
Jan 31 08:05:19 compute-0 sudo[124614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:19 compute-0 python3.9[124616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:05:20 compute-0 sudo[124614]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:20 compute-0 sudo[124766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlcjpqvvxuwjsyqkemueiuyzridcktvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846720.254455-68-103494431694328/AnsiballZ_blockinfile.py'
Jan 31 08:05:20 compute-0 sudo[124766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:20 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 31 08:05:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:20 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 31 08:05:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:21 compute-0 python3.9[124768]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSi7g2ictuN272qiLsoojfDgx9lVVeboCWir6rHMCvPas/6btnjBiRTJVqkKZZ4eYOzP+Weh/EzuT+JxHSkyL/+Ny46rPtucKgaliFZHkmYaXkqXDO2hgUREKT1GuGQzwsjZJ1vHputMWP5ScgRg8J5Fb7dOqFetCw+XKlYgSQEES479PDCn07JxC31a98csniIau6S9yA9XKG+kVD+Nh4mnhcFE10YkGvVhoSIZMPwKKaBQUUzJLRIbp7316V+klNshXsetD99gfhEdoWDdH/1ew4fStSfYMA7SX12zAIZhr++IDXVfWwMvf9bF24wE5nbmpAB3ro7wS+zw8BdWd7dNZXCVyjQcGNA08B0H8pO5anFxBjj5yHx/tMOsluEXf04mIitZyxRxeiizNAXRiskLQQTYpSEgQ6JcbyoCc+9WkV/6rIsaxIefHqJty7/8m5wH0FAV4pXkiySzNGqYibMmGqXYp0L7Z5/pYCyeNpsMQZsEFfJwr8C4SvpNV5fBk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKUyx/kEdFEReRk/h5tefV1FGVtIeEqlJ58UerPMBWbi
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPZxv8WmwcRYDh1TZXwppWAC6GeAYeABCRxKbXZ28nbPzHV8jfXeqxH3V0Cwj8EIISR/dBVdlUDrj3cyaqb+iZk=
                                              create=True mode=0644 path=/tmp/ansible.y6p962rb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:21 compute-0 sudo[124766]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 sudo[124918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjxkpkzcozusyqompfewiaemgljgvjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846721.182331-76-40808545363713/AnsiballZ_command.py'
Jan 31 08:05:21 compute-0 sudo[124918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:21 compute-0 python3.9[124920]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.y6p962rb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:21 compute-0 sudo[124918]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 ceph-mon[75294]: 9.7 scrub starts
Jan 31 08:05:21 compute-0 ceph-mon[75294]: 9.7 scrub ok
Jan 31 08:05:21 compute-0 ceph-mon[75294]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:22 compute-0 sudo[125072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjkfmuxrxoiupxdlnkgkitquykkuvrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846721.9956772-84-126088114718055/AnsiballZ_file.py'
Jan 31 08:05:22 compute-0 sudo[125072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:22 compute-0 python3.9[125074]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.y6p962rb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:22 compute-0 sudo[125072]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:22 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 08:05:22 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 08:05:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:23 compute-0 sshd-session[123729]: Connection closed by 192.168.122.30 port 36880
Jan 31 08:05:23 compute-0 sshd-session[123726]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:05:23 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 08:05:23 compute-0 systemd[1]: session-41.scope: Consumed 4.387s CPU time.
Jan 31 08:05:23 compute-0 systemd-logind[810]: Session 41 logged out. Waiting for processes to exit.
Jan 31 08:05:23 compute-0 systemd-logind[810]: Removed session 41.
Jan 31 08:05:23 compute-0 ceph-mon[75294]: 9.c scrub starts
Jan 31 08:05:23 compute-0 ceph-mon[75294]: 9.c scrub ok
Jan 31 08:05:23 compute-0 ceph-mon[75294]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:25 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 08:05:25 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 08:05:25 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 08:05:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:26 compute-0 ceph-mon[75294]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:27 compute-0 ceph-mon[75294]: 9.f scrub starts
Jan 31 08:05:27 compute-0 ceph-mon[75294]: 9.f scrub ok
Jan 31 08:05:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 31 08:05:27 compute-0 ceph-osd[88061]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 31 08:05:28 compute-0 ceph-mon[75294]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:29 compute-0 sshd-session[125101]: Accepted publickey for zuul from 192.168.122.30 port 45146 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:05:29 compute-0 systemd-logind[810]: New session 42 of user zuul.
Jan 31 08:05:29 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 31 08:05:29 compute-0 sshd-session[125101]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:05:29 compute-0 ceph-mon[75294]: 9.17 scrub starts
Jan 31 08:05:29 compute-0 ceph-mon[75294]: 9.17 scrub ok
Jan 31 08:05:30 compute-0 python3.9[125254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:05:30 compute-0 ceph-mon[75294]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:31 compute-0 sudo[125408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfzehwwaljcywxleyfpeodtkdlonbebg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846730.6274023-27-83291504571809/AnsiballZ_systemd.py'
Jan 31 08:05:31 compute-0 sudo[125408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:31 compute-0 ceph-mon[75294]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:31 compute-0 python3.9[125410]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 08:05:31 compute-0 sudo[125408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:32 compute-0 sudo[125562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vphpoxciuttqlizimfrlbrgaddjluaos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846731.812469-35-122170012484027/AnsiballZ_systemd.py'
Jan 31 08:05:32 compute-0 sudo[125562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:32 compute-0 python3.9[125564]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:05:32 compute-0 sudo[125562]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:32 compute-0 sudo[125715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oueisogpzkpagwvopmytjsdrunctywbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846732.59369-44-266003063452324/AnsiballZ_command.py'
Jan 31 08:05:32 compute-0 sudo[125715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:33 compute-0 python3.9[125717]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:33 compute-0 sudo[125715]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:33 compute-0 sudo[125868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxjfycofjkhtqeqxosuymnzsxomvbxsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846733.3117497-52-32620446521647/AnsiballZ_stat.py'
Jan 31 08:05:33 compute-0 sudo[125868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:33 compute-0 python3.9[125870]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:05:33 compute-0 sudo[125868]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:33 compute-0 ceph-mon[75294]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:34 compute-0 sudo[126020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmbztxqsqcopkuyboghxmtcvtpkbidgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846734.0451157-61-181803910871000/AnsiballZ_file.py'
Jan 31 08:05:34 compute-0 sudo[126020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:34 compute-0 python3.9[126022]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:34 compute-0 sudo[126020]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:34 compute-0 sshd-session[125104]: Connection closed by 192.168.122.30 port 45146
Jan 31 08:05:34 compute-0 sshd-session[125101]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:05:34 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 08:05:34 compute-0 systemd[1]: session-42.scope: Consumed 3.599s CPU time.
Jan 31 08:05:34 compute-0 systemd-logind[810]: Session 42 logged out. Waiting for processes to exit.
Jan 31 08:05:34 compute-0 systemd-logind[810]: Removed session 42.
Jan 31 08:05:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:36 compute-0 ceph-mon[75294]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:37 compute-0 ceph-mon[75294]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:39 compute-0 sudo[126048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:39 compute-0 ceph-mon[75294]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:39 compute-0 sudo[126048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:39 compute-0 sudo[126048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:40 compute-0 sudo[126073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:05:40 compute-0 sudo[126073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:40 compute-0 sudo[126073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:05:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:05:40 compute-0 sudo[126129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:40 compute-0 sudo[126129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:40 compute-0 sudo[126129]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:40 compute-0 sudo[126156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:05:40 compute-0 sudo[126156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:40 compute-0 sshd-session[126146]: Accepted publickey for zuul from 192.168.122.30 port 49450 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:05:40 compute-0 systemd-logind[810]: New session 43 of user zuul.
Jan 31 08:05:40 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 31 08:05:40 compute-0 sshd-session[126146]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.893846) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846740893874, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1580, "num_deletes": 250, "total_data_size": 1979510, "memory_usage": 2019000, "flush_reason": "Manual Compaction"}
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 08:05:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846740906395, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1199510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7465, "largest_seqno": 9044, "table_properties": {"data_size": 1194288, "index_size": 2169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16692, "raw_average_key_size": 21, "raw_value_size": 1181377, "raw_average_value_size": 1503, "num_data_blocks": 100, "num_entries": 786, "num_filter_entries": 786, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846615, "oldest_key_time": 1769846615, "file_creation_time": 1769846740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 12618 microseconds, and 3059 cpu microseconds.
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.906456) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1199510 bytes OK
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.906481) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.912095) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.912130) EVENT_LOG_v1 {"time_micros": 1769846740912120, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.912158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1972280, prev total WAL file size 1972280, number of live WAL files 2.
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:05:40 compute-0 podman[126247]: 2026-01-31 08:05:40.911227906 +0000 UTC m=+0.051358430 container create a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.912893) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1171KB)], [20(7582KB)]
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846740913003, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8964392, "oldest_snapshot_seqno": -1}
Jan 31 08:05:40 compute-0 systemd[1]: Started libpod-conmon-a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3.scope.
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3418 keys, 7185856 bytes, temperature: kUnknown
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846740975764, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7185856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7159227, "index_size": 16983, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81855, "raw_average_key_size": 23, "raw_value_size": 7093718, "raw_average_value_size": 2075, "num_data_blocks": 747, "num_entries": 3418, "num_filter_entries": 3418, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769846740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:05:40 compute-0 podman[126247]: 2026-01-31 08:05:40.882863185 +0000 UTC m=+0.022993739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:05:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:05:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.976108) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7185856 bytes
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.980402) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.6 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.4 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(13.5) write-amplify(6.0) OK, records in: 3874, records dropped: 456 output_compression: NoCompression
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.980472) EVENT_LOG_v1 {"time_micros": 1769846740980444, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62862, "compaction_time_cpu_micros": 21034, "output_level": 6, "num_output_files": 1, "total_output_size": 7185856, "num_input_records": 3874, "num_output_records": 3418, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846740980998, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846740982155, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.912765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.982190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.982196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.982197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.982199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:05:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:05:40.982201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:05:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:41 compute-0 podman[126247]: 2026-01-31 08:05:41.001373874 +0000 UTC m=+0.141504428 container init a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:05:41 compute-0 podman[126247]: 2026-01-31 08:05:41.007159187 +0000 UTC m=+0.147289731 container start a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:05:41 compute-0 practical_hellman[126264]: 167 167
Jan 31 08:05:41 compute-0 systemd[1]: libpod-a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3.scope: Deactivated successfully.
Jan 31 08:05:41 compute-0 podman[126247]: 2026-01-31 08:05:41.015897728 +0000 UTC m=+0.156028292 container attach a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:05:41 compute-0 podman[126247]: 2026-01-31 08:05:41.016405492 +0000 UTC m=+0.156536046 container died a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3574ca6f1380e942409e920c58ca028eb81fcab9aa15711d57024bf63837703-merged.mount: Deactivated successfully.
Jan 31 08:05:41 compute-0 podman[126247]: 2026-01-31 08:05:41.071925012 +0000 UTC m=+0.212055536 container remove a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:05:41 compute-0 systemd[1]: libpod-conmon-a37675eaf7e01e7c154a16dc868652ea55fba4227d2bb82b1f6445a5767e9fe3.scope: Deactivated successfully.
Jan 31 08:05:41 compute-0 podman[126299]: 2026-01-31 08:05:41.181052271 +0000 UTC m=+0.038059218 container create 141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:41 compute-0 systemd[1]: Started libpod-conmon-141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83.scope.
Jan 31 08:05:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e5f81810c2f2810f6c8072511fb17ae726e662f3067916eff2ef35e9cd0b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e5f81810c2f2810f6c8072511fb17ae726e662f3067916eff2ef35e9cd0b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e5f81810c2f2810f6c8072511fb17ae726e662f3067916eff2ef35e9cd0b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e5f81810c2f2810f6c8072511fb17ae726e662f3067916eff2ef35e9cd0b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e5f81810c2f2810f6c8072511fb17ae726e662f3067916eff2ef35e9cd0b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:41 compute-0 podman[126299]: 2026-01-31 08:05:41.16287257 +0000 UTC m=+0.019879537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:05:41 compute-0 podman[126299]: 2026-01-31 08:05:41.259588431 +0000 UTC m=+0.116595398 container init 141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:41 compute-0 podman[126299]: 2026-01-31 08:05:41.266393141 +0000 UTC m=+0.123400088 container start 141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:05:41 compute-0 podman[126299]: 2026-01-31 08:05:41.27313409 +0000 UTC m=+0.130141057 container attach 141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:41 compute-0 python3.9[126404]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:05:41 compute-0 infallible_beaver[126346]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:05:41 compute-0 infallible_beaver[126346]: --> All data devices are unavailable
Jan 31 08:05:41 compute-0 systemd[1]: libpod-141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83.scope: Deactivated successfully.
Jan 31 08:05:41 compute-0 podman[126299]: 2026-01-31 08:05:41.704679927 +0000 UTC m=+0.561686884 container died 141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c3e5f81810c2f2810f6c8072511fb17ae726e662f3067916eff2ef35e9cd0b2-merged.mount: Deactivated successfully.
Jan 31 08:05:41 compute-0 podman[126299]: 2026-01-31 08:05:41.882559667 +0000 UTC m=+0.739566614 container remove 141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:05:41 compute-0 systemd[1]: libpod-conmon-141f11407d6cd197dafc59b709b22ac596c075d1eacb63ef04ad5eea20519b83.scope: Deactivated successfully.
Jan 31 08:05:41 compute-0 sudo[126156]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:41 compute-0 sudo[126461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:41 compute-0 sudo[126461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:41 compute-0 sudo[126461]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:42 compute-0 sudo[126486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:05:42 compute-0 sudo[126486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:42 compute-0 ceph-mon[75294]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:42 compute-0 sudo[126651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcflbnudtrbeoydhfcsqylanyfenymrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846742.0338671-29-91374021644017/AnsiballZ_setup.py'
Jan 31 08:05:42 compute-0 sudo[126651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:42 compute-0 podman[126649]: 2026-01-31 08:05:42.372237024 +0000 UTC m=+0.082105475 container create 37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:05:42 compute-0 podman[126649]: 2026-01-31 08:05:42.326255926 +0000 UTC m=+0.036124397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:05:42 compute-0 systemd[1]: Started libpod-conmon-37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58.scope.
Jan 31 08:05:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:42 compute-0 podman[126649]: 2026-01-31 08:05:42.485527874 +0000 UTC m=+0.195396335 container init 37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:05:42 compute-0 podman[126649]: 2026-01-31 08:05:42.493119935 +0000 UTC m=+0.202988366 container start 37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:05:42 compute-0 goofy_davinci[126668]: 167 167
Jan 31 08:05:42 compute-0 systemd[1]: libpod-37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58.scope: Deactivated successfully.
Jan 31 08:05:42 compute-0 podman[126649]: 2026-01-31 08:05:42.507209508 +0000 UTC m=+0.217077969 container attach 37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_davinci, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:05:42 compute-0 podman[126649]: 2026-01-31 08:05:42.507695501 +0000 UTC m=+0.217563942 container died 37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0194d818d87272ebd866cf32f392cdc30437a9f31d58f25f5ba4ba366738eaa3-merged.mount: Deactivated successfully.
Jan 31 08:05:42 compute-0 podman[126649]: 2026-01-31 08:05:42.611760647 +0000 UTC m=+0.321629078 container remove 37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:05:42 compute-0 systemd[1]: libpod-conmon-37cfbcfea0175d58ee1de86ef467dd951e668e21e7dee67a569c5de2eb8a2c58.scope: Deactivated successfully.
Jan 31 08:05:42 compute-0 python3.9[126658]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:05:42 compute-0 podman[126702]: 2026-01-31 08:05:42.81580683 +0000 UTC m=+0.119894886 container create d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:42 compute-0 podman[126702]: 2026-01-31 08:05:42.723640289 +0000 UTC m=+0.027728445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:05:42 compute-0 systemd[1]: Started libpod-conmon-d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23.scope.
Jan 31 08:05:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2285d0805907457bafbe1f176658578b3f551d54d8a3112837cd366d9aa64850/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2285d0805907457bafbe1f176658578b3f551d54d8a3112837cd366d9aa64850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2285d0805907457bafbe1f176658578b3f551d54d8a3112837cd366d9aa64850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2285d0805907457bafbe1f176658578b3f551d54d8a3112837cd366d9aa64850/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:42 compute-0 sudo[126651]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:42 compute-0 podman[126702]: 2026-01-31 08:05:42.954714668 +0000 UTC m=+0.258802734 container init d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_faraday, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:05:42 compute-0 podman[126702]: 2026-01-31 08:05:42.962503775 +0000 UTC m=+0.266591821 container start d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_faraday, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:05:42 compute-0 podman[126702]: 2026-01-31 08:05:42.979098373 +0000 UTC m=+0.283186449 container attach d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:05:43 compute-0 stoic_faraday[126718]: {
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:     "0": [
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:         {
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "devices": [
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "/dev/loop3"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             ],
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_name": "ceph_lv0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_size": "21470642176",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "name": "ceph_lv0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "tags": {
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cluster_name": "ceph",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.crush_device_class": "",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.encrypted": "0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.objectstore": "bluestore",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osd_id": "0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.type": "block",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.vdo": "0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.with_tpm": "0"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             },
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "type": "block",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "vg_name": "ceph_vg0"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:         }
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:     ],
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:     "1": [
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:         {
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "devices": [
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "/dev/loop4"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             ],
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_name": "ceph_lv1",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_size": "21470642176",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "name": "ceph_lv1",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "tags": {
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cluster_name": "ceph",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.crush_device_class": "",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.encrypted": "0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.objectstore": "bluestore",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osd_id": "1",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.type": "block",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.vdo": "0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.with_tpm": "0"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             },
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "type": "block",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "vg_name": "ceph_vg1"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:         }
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:     ],
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:     "2": [
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:         {
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "devices": [
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "/dev/loop5"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             ],
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_name": "ceph_lv2",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_size": "21470642176",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "name": "ceph_lv2",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "tags": {
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.cluster_name": "ceph",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.crush_device_class": "",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.encrypted": "0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.objectstore": "bluestore",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osd_id": "2",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.type": "block",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.vdo": "0",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:                 "ceph.with_tpm": "0"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             },
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "type": "block",
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:             "vg_name": "ceph_vg2"
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:         }
Jan 31 08:05:43 compute-0 stoic_faraday[126718]:     ]
Jan 31 08:05:43 compute-0 stoic_faraday[126718]: }
Jan 31 08:05:43 compute-0 systemd[1]: libpod-d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23.scope: Deactivated successfully.
Jan 31 08:05:43 compute-0 podman[126702]: 2026-01-31 08:05:43.260564787 +0000 UTC m=+0.564652853 container died d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:05:43 compute-0 sudo[126806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbjssbvzmwibzmtauxubznnozntpayg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846742.0338671-29-91374021644017/AnsiballZ_dnf.py'
Jan 31 08:05:43 compute-0 sudo[126806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2285d0805907457bafbe1f176658578b3f551d54d8a3112837cd366d9aa64850-merged.mount: Deactivated successfully.
Jan 31 08:05:43 compute-0 podman[126702]: 2026-01-31 08:05:43.372136951 +0000 UTC m=+0.676225027 container remove d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_faraday, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:05:43 compute-0 systemd[1]: libpod-conmon-d63520645da33a0e7d9c6e204796d2c19a1361902c7890514609003a31e13b23.scope: Deactivated successfully.
Jan 31 08:05:43 compute-0 sudo[126486]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:43 compute-0 sudo[126815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:43 compute-0 sudo[126815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:43 compute-0 sudo[126815]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:43 compute-0 sudo[126840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:05:43 compute-0 sudo[126840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:43 compute-0 python3.9[126812]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 08:05:43 compute-0 podman[126879]: 2026-01-31 08:05:43.806723879 +0000 UTC m=+0.050957950 container create 93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:05:43 compute-0 systemd[1]: Started libpod-conmon-93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa.scope.
Jan 31 08:05:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:43 compute-0 podman[126879]: 2026-01-31 08:05:43.875621944 +0000 UTC m=+0.119856035 container init 93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:05:43 compute-0 podman[126879]: 2026-01-31 08:05:43.881050168 +0000 UTC m=+0.125284239 container start 93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:05:43 compute-0 podman[126879]: 2026-01-31 08:05:43.786557105 +0000 UTC m=+0.030791196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:05:43 compute-0 dazzling_wu[126897]: 167 167
Jan 31 08:05:43 compute-0 systemd[1]: libpod-93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa.scope: Deactivated successfully.
Jan 31 08:05:43 compute-0 conmon[126897]: conmon 93774dd45edd4bb2157b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa.scope/container/memory.events
Jan 31 08:05:43 compute-0 podman[126879]: 2026-01-31 08:05:43.886990915 +0000 UTC m=+0.131225016 container attach 93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:05:43 compute-0 podman[126879]: 2026-01-31 08:05:43.888067854 +0000 UTC m=+0.132301925 container died 93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f861220f412c7e79dcd02d0d06747d02e01c101cd4cc35a951b7d98e25c48c4f-merged.mount: Deactivated successfully.
Jan 31 08:05:43 compute-0 podman[126879]: 2026-01-31 08:05:43.934706878 +0000 UTC m=+0.178940949 container remove 93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:05:43 compute-0 systemd[1]: libpod-conmon-93774dd45edd4bb2157ba491677c301530b2ace7d0a7bfbfbabc0fdd8c19acaa.scope: Deactivated successfully.
Jan 31 08:05:44 compute-0 ceph-mon[75294]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:44 compute-0 podman[126919]: 2026-01-31 08:05:44.067124854 +0000 UTC m=+0.047218650 container create ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:05:44 compute-0 systemd[1]: Started libpod-conmon-ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29.scope.
Jan 31 08:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a77cc8191fdb3fec21bbb92bc5b227f0cb4b5b4efc76067dd6187f4cf0f488/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a77cc8191fdb3fec21bbb92bc5b227f0cb4b5b4efc76067dd6187f4cf0f488/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a77cc8191fdb3fec21bbb92bc5b227f0cb4b5b4efc76067dd6187f4cf0f488/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a77cc8191fdb3fec21bbb92bc5b227f0cb4b5b4efc76067dd6187f4cf0f488/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:44 compute-0 podman[126919]: 2026-01-31 08:05:44.042867022 +0000 UTC m=+0.022960838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:05:44 compute-0 podman[126919]: 2026-01-31 08:05:44.155863585 +0000 UTC m=+0.135957461 container init ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:05:44 compute-0 podman[126919]: 2026-01-31 08:05:44.160508947 +0000 UTC m=+0.140602743 container start ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:05:44 compute-0 podman[126919]: 2026-01-31 08:05:44.16777986 +0000 UTC m=+0.147873716 container attach ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:05:44 compute-0 sshd-session[126893]: Invalid user validator from 193.32.162.145 port 58882
Jan 31 08:05:44 compute-0 sshd-session[126893]: Connection closed by invalid user validator 193.32.162.145 port 58882 [preauth]
Jan 31 08:05:44 compute-0 sudo[126806]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:44 compute-0 lvm[127039]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:05:44 compute-0 lvm[127038]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:05:44 compute-0 lvm[127039]: VG ceph_vg1 finished
Jan 31 08:05:44 compute-0 lvm[127038]: VG ceph_vg0 finished
Jan 31 08:05:44 compute-0 lvm[127041]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:05:44 compute-0 lvm[127041]: VG ceph_vg2 finished
Jan 31 08:05:44 compute-0 bold_payne[126936]: {}
Jan 31 08:05:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:44 compute-0 systemd[1]: libpod-ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29.scope: Deactivated successfully.
Jan 31 08:05:44 compute-0 systemd[1]: libpod-ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29.scope: Consumed 1.084s CPU time.
Jan 31 08:05:44 compute-0 podman[126919]: 2026-01-31 08:05:44.920723928 +0000 UTC m=+0.900817734 container died ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-34a77cc8191fdb3fec21bbb92bc5b227f0cb4b5b4efc76067dd6187f4cf0f488-merged.mount: Deactivated successfully.
Jan 31 08:05:44 compute-0 podman[126919]: 2026-01-31 08:05:44.979506684 +0000 UTC m=+0.959600490 container remove ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:05:44 compute-0 systemd[1]: libpod-conmon-ce82d7c2efe619b05965b118c628f708bdb71692d591b4f14ace92411c9cef29.scope: Deactivated successfully.
Jan 31 08:05:45 compute-0 sudo[126840]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:05:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:05:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:05:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:05:45 compute-0 sudo[127111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:05:45 compute-0 sudo[127111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:45 compute-0 sudo[127111]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:45 compute-0 python3.9[127209]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:46 compute-0 ceph-mon[75294]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:05:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:05:46 compute-0 python3.9[127360]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:05:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:47 compute-0 python3.9[127510]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:05:48 compute-0 python3.9[127660]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:05:48 compute-0 ceph-mon[75294]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:48 compute-0 sshd-session[126182]: Connection closed by 192.168.122.30 port 49450
Jan 31 08:05:48 compute-0 sshd-session[126146]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:05:48 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 08:05:48 compute-0 systemd[1]: session-43.scope: Consumed 5.389s CPU time.
Jan 31 08:05:48 compute-0 systemd-logind[810]: Session 43 logged out. Waiting for processes to exit.
Jan 31 08:05:48 compute-0 systemd-logind[810]: Removed session 43.
Jan 31 08:05:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:49 compute-0 ceph-mon[75294]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:05:50
Jan 31 08:05:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:05:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:05:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'vms', 'default.rgw.meta']
Jan 31 08:05:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:05:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:51 compute-0 ceph-mon[75294]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:53 compute-0 sshd-session[127685]: Accepted publickey for zuul from 192.168.122.30 port 44532 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:05:53 compute-0 systemd-logind[810]: New session 44 of user zuul.
Jan 31 08:05:53 compute-0 ceph-mon[75294]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:54 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 31 08:05:54 compute-0 sshd-session[127685]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:05:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:54 compute-0 python3.9[127838]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:05:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:05:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:05:56 compute-0 ceph-mon[75294]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:56 compute-0 sudo[127992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpgfvpnshhjrbntkvuqjkbfwfodthxqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846756.026089-45-108472873709333/AnsiballZ_file.py'
Jan 31 08:05:56 compute-0 sudo[127992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:56 compute-0 python3.9[127994]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:05:56 compute-0 sudo[127992]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:56 compute-0 sudo[128144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egklwayxbweaazfutpszklakhfcuvodj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846756.7246559-45-27384237592564/AnsiballZ_file.py'
Jan 31 08:05:56 compute-0 sudo[128144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:57 compute-0 python3.9[128146]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:05:57 compute-0 sudo[128144]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:57 compute-0 sudo[128296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syudcvvbmtzrixzqiacrnqcmdmtxsaof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846757.3617694-60-98800335930139/AnsiballZ_stat.py'
Jan 31 08:05:57 compute-0 sudo[128296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:57 compute-0 python3.9[128298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:57 compute-0 sudo[128296]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:58 compute-0 ceph-mon[75294]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:58 compute-0 sudo[128419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyahcybvxupfkxhmngenxchmgnuamyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846757.3617694-60-98800335930139/AnsiballZ_copy.py'
Jan 31 08:05:58 compute-0 sudo[128419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:58 compute-0 python3.9[128421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846757.3617694-60-98800335930139/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8a4aa32bd9a6578c05e4597467d176e067abc7c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:58 compute-0 sudo[128419]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:05:59 compute-0 sudo[128571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyuwlxwrzzgvllrwtxvuyjmnkphrqpdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846758.7568412-60-248316207118061/AnsiballZ_stat.py'
Jan 31 08:05:59 compute-0 sudo[128571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:59 compute-0 python3.9[128573]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:05:59 compute-0 sudo[128571]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:59 compute-0 sudo[128694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acbxojivulxlulcedkgxhgbslnxxsrob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846758.7568412-60-248316207118061/AnsiballZ_copy.py'
Jan 31 08:05:59 compute-0 sudo[128694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:59 compute-0 python3.9[128696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846758.7568412-60-248316207118061/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=daec0d621282fb405f82369b0e43d13b4f800b6c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:59 compute-0 sudo[128694]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:00 compute-0 ceph-mon[75294]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:00 compute-0 sudo[128846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkruncsagobmrbetwdzgopaawtpjeqnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846759.7999635-60-223137025773638/AnsiballZ_stat.py'
Jan 31 08:06:00 compute-0 sudo[128846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:00 compute-0 python3.9[128848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:00 compute-0 sudo[128846]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:00 compute-0 sudo[128969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtthpushuywaaetkbgrqiqmzmhiahrrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846759.7999635-60-223137025773638/AnsiballZ_copy.py'
Jan 31 08:06:00 compute-0 sudo[128969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:00 compute-0 python3.9[128971]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846759.7999635-60-223137025773638/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6bd72787681b52dc76c33f3d97e1cfce8f0ccca2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:00 compute-0 sudo[128969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:01 compute-0 sudo[129121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhfbxsfpitvrqgaasqxufxaqxonfczlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846761.0776474-104-58542947966616/AnsiballZ_file.py'
Jan 31 08:06:01 compute-0 sudo[129121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:01 compute-0 python3.9[129123]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:01 compute-0 sudo[129121]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:01 compute-0 sshd-session[71452]: Received disconnect from 38.102.83.129 port 58200:11: disconnected by user
Jan 31 08:06:01 compute-0 sshd-session[71452]: Disconnected from user zuul 38.102.83.129 port 58200
Jan 31 08:06:01 compute-0 sshd-session[71449]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:06:01 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 08:06:01 compute-0 systemd[1]: session-18.scope: Consumed 1min 28.695s CPU time.
Jan 31 08:06:01 compute-0 systemd-logind[810]: Session 18 logged out. Waiting for processes to exit.
Jan 31 08:06:01 compute-0 systemd-logind[810]: Removed session 18.
Jan 31 08:06:01 compute-0 sudo[129273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbeeolbmpyrsfnifuapchfmdhyusdtuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846761.6654902-104-233158368807016/AnsiballZ_file.py'
Jan 31 08:06:01 compute-0 sudo[129273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:02 compute-0 ceph-mon[75294]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:02 compute-0 python3.9[129275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:02 compute-0 sudo[129273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:02 compute-0 sudo[129425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocwuopurbpuggivdtxtawpdlemqluxiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846762.259586-119-139396375546205/AnsiballZ_stat.py'
Jan 31 08:06:02 compute-0 sudo[129425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:02 compute-0 python3.9[129427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:02 compute-0 sudo[129425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:02 compute-0 sudo[129548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yretisjponcliqktuiywrbhwicwpaxof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846762.259586-119-139396375546205/AnsiballZ_copy.py'
Jan 31 08:06:02 compute-0 sudo[129548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:03 compute-0 python3.9[129550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846762.259586-119-139396375546205/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3a953bc662a01a09af56d4c79d71ec6c213f41be backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:03 compute-0 sudo[129548]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:03 compute-0 sudo[129700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tunlzmqloqefczfanfiqudjhgwwpmbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846763.3418524-119-233047459706713/AnsiballZ_stat.py'
Jan 31 08:06:03 compute-0 sudo[129700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:03 compute-0 python3.9[129702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:03 compute-0 sudo[129700]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:04 compute-0 ceph-mon[75294]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:04 compute-0 sudo[129823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rornyvsgqyskwpttmqodvyazypvggmol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846763.3418524-119-233047459706713/AnsiballZ_copy.py'
Jan 31 08:06:04 compute-0 sudo[129823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:04 compute-0 python3.9[129825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846763.3418524-119-233047459706713/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=19e7320cb36758df854d967de8225d02c114a52f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:04 compute-0 sudo[129823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:04 compute-0 sudo[129975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkwvoorcjiasfjzdehuxlqfvduueosnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846764.4333751-119-276787218075805/AnsiballZ_stat.py'
Jan 31 08:06:04 compute-0 sudo[129975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:04 compute-0 python3.9[129977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:04 compute-0 sudo[129975]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:05 compute-0 sudo[130098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufziwpmlyjccujzoersowqibqlwtubim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846764.4333751-119-276787218075805/AnsiballZ_copy.py'
Jan 31 08:06:05 compute-0 sudo[130098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:05 compute-0 python3.9[130100]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846764.4333751-119-276787218075805/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2b7f2e2d34092c371ac3ab85704a003a6adfe76c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:05 compute-0 sudo[130098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:05 compute-0 sudo[130250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvrdmuwsfqawsvivnbkziiobekdtffgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846765.51838-163-238353816563088/AnsiballZ_file.py'
Jan 31 08:06:05 compute-0 sudo[130250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:05 compute-0 python3.9[130252]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:05 compute-0 sudo[130250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:06 compute-0 ceph-mon[75294]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:06:06 compute-0 sudo[130402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsdpslqlhotvyetgtwwjobuqnxczmifi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846766.0522118-163-41301096277858/AnsiballZ_file.py'
Jan 31 08:06:06 compute-0 sudo[130402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:06 compute-0 python3.9[130404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:06 compute-0 sudo[130402]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:07 compute-0 sudo[130554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfstsfmlfxpkiseutwkudmcabryxdhcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846766.8962169-178-84818797943688/AnsiballZ_stat.py'
Jan 31 08:06:07 compute-0 sudo[130554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:07 compute-0 python3.9[130556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:07 compute-0 sudo[130554]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:07 compute-0 sudo[130677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhxstxqiuehrlfeivhzraxncqfygqndh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846766.8962169-178-84818797943688/AnsiballZ_copy.py'
Jan 31 08:06:07 compute-0 sudo[130677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:07 compute-0 python3.9[130679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846766.8962169-178-84818797943688/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c4d59392427d40c289805c6fc3fa8cedb0f5ee36 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:07 compute-0 sudo[130677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:08 compute-0 ceph-mon[75294]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:08 compute-0 sudo[130829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgtvpplpvtzvmunmxmcqauzbctikmejv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846767.8976452-178-2667646780212/AnsiballZ_stat.py'
Jan 31 08:06:08 compute-0 sudo[130829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:08 compute-0 python3.9[130831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:08 compute-0 sudo[130829]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:08 compute-0 sudo[130952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opgkyowbvtpactwloladanefyhmiqghw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846767.8976452-178-2667646780212/AnsiballZ_copy.py'
Jan 31 08:06:08 compute-0 sudo[130952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:08 compute-0 python3.9[130954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846767.8976452-178-2667646780212/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=19e7320cb36758df854d967de8225d02c114a52f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:08 compute-0 sudo[130952]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:09 compute-0 sudo[131104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txlestspbgbqkksyqbqnheqfpahnglau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846768.8786755-178-212120498536664/AnsiballZ_stat.py'
Jan 31 08:06:09 compute-0 sudo[131104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:09 compute-0 python3.9[131106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:09 compute-0 sudo[131104]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:09 compute-0 sudo[131227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zooeorptupptwujgvamohliehhepklsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846768.8786755-178-212120498536664/AnsiballZ_copy.py'
Jan 31 08:06:09 compute-0 sudo[131227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:10 compute-0 python3.9[131229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846768.8786755-178-212120498536664/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=5aef295156ea67794f1054a597605ef2194ac806 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:10 compute-0 sudo[131227]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:10 compute-0 ceph-mon[75294]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:10 compute-0 sudo[131379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opdqmlcqsyguaaxtucpapfrarqtlirrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846770.7177567-238-229641260082464/AnsiballZ_file.py'
Jan 31 08:06:10 compute-0 sudo[131379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:11 compute-0 ceph-mon[75294]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:11 compute-0 python3.9[131381]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:11 compute-0 sudo[131379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:11 compute-0 sudo[131531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwalxpqubuegmcxozhwwqyenidrlmpbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846771.2726526-246-233731716905289/AnsiballZ_stat.py'
Jan 31 08:06:11 compute-0 sudo[131531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:11 compute-0 python3.9[131533]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:11 compute-0 sudo[131531]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:12 compute-0 sudo[131654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eelalstncdxzafztrqcesoqvnicxbsoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846771.2726526-246-233731716905289/AnsiballZ_copy.py'
Jan 31 08:06:12 compute-0 sudo[131654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:12 compute-0 python3.9[131656]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846771.2726526-246-233731716905289/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ff538dbcfa65d2a0e72b63d2920a0809a609b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:12 compute-0 sudo[131654]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:12 compute-0 sudo[131806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvhzlhduzqckshifotbygnruxxpymywj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846772.4041464-262-234959523848327/AnsiballZ_file.py'
Jan 31 08:06:12 compute-0 sudo[131806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:12 compute-0 python3.9[131808]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:12 compute-0 sudo[131806]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:13 compute-0 sudo[131958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpgspelhpdoludzokvwzdwrwzckepxkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846772.9679258-270-40843068049770/AnsiballZ_stat.py'
Jan 31 08:06:13 compute-0 sudo[131958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:13 compute-0 python3.9[131960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:13 compute-0 sudo[131958]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:13 compute-0 sudo[132082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkuqibqxkvaatrfnkvaubfrfsymkowui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846772.9679258-270-40843068049770/AnsiballZ_copy.py'
Jan 31 08:06:13 compute-0 sudo[132082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:13 compute-0 python3.9[132084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846772.9679258-270-40843068049770/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ff538dbcfa65d2a0e72b63d2920a0809a609b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:13 compute-0 sudo[132082]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:13 compute-0 ceph-mon[75294]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:14 compute-0 sudo[132234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yebyotrankeqxpkhvfjuhlklgoqfeznr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846774.088687-286-278434805959006/AnsiballZ_file.py'
Jan 31 08:06:14 compute-0 sudo[132234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:14 compute-0 python3.9[132236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:14 compute-0 sudo[132234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:14 compute-0 sudo[132386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllwdnpnidsmpwfuizqdlazketfuvmxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846774.6733572-294-273521050983459/AnsiballZ_stat.py'
Jan 31 08:06:14 compute-0 sudo[132386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:15 compute-0 python3.9[132388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:15 compute-0 sudo[132386]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:15 compute-0 sudo[132509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjkasjooinfwiprprpmrkrxluhbvkuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846774.6733572-294-273521050983459/AnsiballZ_copy.py'
Jan 31 08:06:15 compute-0 sudo[132509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:15 compute-0 python3.9[132511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846774.6733572-294-273521050983459/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ff538dbcfa65d2a0e72b63d2920a0809a609b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:15 compute-0 sudo[132509]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:15 compute-0 ceph-mon[75294]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:16 compute-0 sudo[132662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnblcmqrksnghkcslurhcwpunufhtkcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846775.7918863-310-218909937647581/AnsiballZ_file.py'
Jan 31 08:06:16 compute-0 sudo[132662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:16 compute-0 python3.9[132664]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:16 compute-0 sudo[132662]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:16 compute-0 sudo[132814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hljzaoyxkxxvjjdtenterhgqcdswzlow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846776.4077544-318-140278205308505/AnsiballZ_stat.py'
Jan 31 08:06:16 compute-0 sudo[132814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:16 compute-0 python3.9[132816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:16 compute-0 sudo[132814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:17 compute-0 sudo[132937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptswoelrbxqdlorsejnryckkntbwwoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846776.4077544-318-140278205308505/AnsiballZ_copy.py'
Jan 31 08:06:17 compute-0 sudo[132937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:17 compute-0 python3.9[132939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846776.4077544-318-140278205308505/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ff538dbcfa65d2a0e72b63d2920a0809a609b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:17 compute-0 sudo[132937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:17 compute-0 sudo[133089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruthqezuyfzrqalasyylsdldbkupdrfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846777.5884404-334-201653055470083/AnsiballZ_file.py'
Jan 31 08:06:17 compute-0 sudo[133089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:18 compute-0 ceph-mon[75294]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:18 compute-0 python3.9[133091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:18 compute-0 sudo[133089]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:18 compute-0 sudo[133241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdbqxosbfurpzuegghvhrkkygpzudnrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846778.2666686-342-71039883908679/AnsiballZ_stat.py'
Jan 31 08:06:18 compute-0 sudo[133241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:18 compute-0 python3.9[133243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:18 compute-0 sudo[133241]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:19 compute-0 sudo[133364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fielhknndwjvgkyqryfdudxtnkvapoxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846778.2666686-342-71039883908679/AnsiballZ_copy.py'
Jan 31 08:06:19 compute-0 sudo[133364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:19 compute-0 python3.9[133366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846778.2666686-342-71039883908679/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ff538dbcfa65d2a0e72b63d2920a0809a609b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:19 compute-0 sudo[133364]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:19 compute-0 sudo[133516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cigqkclqcmocehetplosilwznquznjlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846779.4781666-358-76043274285955/AnsiballZ_file.py'
Jan 31 08:06:19 compute-0 sudo[133516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:20 compute-0 python3.9[133518]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:20 compute-0 ceph-mon[75294]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:20 compute-0 sudo[133516]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:20 compute-0 sudo[133668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgqfcnmnedkxsvhzdiufozwhpxryhvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846780.1424637-366-194905049555840/AnsiballZ_stat.py'
Jan 31 08:06:20 compute-0 sudo[133668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:20 compute-0 python3.9[133670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:20 compute-0 sudo[133668]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:20 compute-0 sudo[133791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyvfjmznuoslvghklznudmkwnqnctzhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846780.1424637-366-194905049555840/AnsiballZ_copy.py'
Jan 31 08:06:20 compute-0 sudo[133791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:21 compute-0 python3.9[133793]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846780.1424637-366-194905049555840/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3ff538dbcfa65d2a0e72b63d2920a0809a609b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:21 compute-0 sudo[133791]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:21 compute-0 sshd-session[127688]: Connection closed by 192.168.122.30 port 44532
Jan 31 08:06:21 compute-0 sshd-session[127685]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:06:21 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 08:06:21 compute-0 systemd[1]: session-44.scope: Consumed 20.522s CPU time.
Jan 31 08:06:21 compute-0 systemd-logind[810]: Session 44 logged out. Waiting for processes to exit.
Jan 31 08:06:21 compute-0 systemd-logind[810]: Removed session 44.
Jan 31 08:06:22 compute-0 ceph-mon[75294]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:23 compute-0 ceph-mon[75294]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:25 compute-0 ceph-mon[75294]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:27 compute-0 sshd-session[133818]: Accepted publickey for zuul from 192.168.122.30 port 41724 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:06:27 compute-0 systemd-logind[810]: New session 45 of user zuul.
Jan 31 08:06:27 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 31 08:06:27 compute-0 sshd-session[133818]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:06:27 compute-0 sudo[133971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakcdxgjvlkpjwrhfhslmrnozbxikwxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846787.4913416-17-157780180241507/AnsiballZ_file.py'
Jan 31 08:06:27 compute-0 sudo[133971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:28 compute-0 ceph-mon[75294]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:28 compute-0 python3.9[133973]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:28 compute-0 sudo[133971]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:06:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 2070 writes, 9258 keys, 2070 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2070 writes, 2070 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2070 writes, 9258 keys, 2070 commit groups, 1.0 writes per commit group, ingest: 11.89 MB, 0.02 MB/s
                                           Interval WAL: 2070 writes, 2070 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     67.5      0.13              0.01         3    0.042       0      0       0.0       0.0
                                             L6      1/0    6.85 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.7    138.4    123.4      0.12              0.03         2    0.058    7253    747       0.0       0.0
                                            Sum      1/0    6.85 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     65.9     94.0      0.24              0.05         5    0.049    7253    747       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     85.5    121.8      0.19              0.05         4    0.047    7253    747       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    138.4    123.4      0.12              0.03         2    0.058    7253    747       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    119.3      0.07              0.01         2    0.036       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cc8bf858d0#2 capacity: 308.00 MB usage: 720.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(39,629.34 KB,0.199543%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,62.94 KB,0.0199553%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:06:28 compute-0 sudo[134123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjonsasrsphsnuncjezsjidylbyzbfbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846788.2694385-29-258676868581737/AnsiballZ_stat.py'
Jan 31 08:06:28 compute-0 sudo[134123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:28 compute-0 python3.9[134125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:28 compute-0 sudo[134123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:29 compute-0 sudo[134246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsgklhddukwjbftmjutcheawooamoiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846788.2694385-29-258676868581737/AnsiballZ_copy.py'
Jan 31 08:06:29 compute-0 sudo[134246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:29 compute-0 python3.9[134248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846788.2694385-29-258676868581737/.source.conf _original_basename=ceph.conf follow=False checksum=98f55df996874c8a6a982fa95afee2344411634c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:29 compute-0 sudo[134246]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:29 compute-0 sudo[134398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejnpnhjszyhrmogjuzjlbvpkmwadnue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846789.6528935-29-67910943657313/AnsiballZ_stat.py'
Jan 31 08:06:29 compute-0 sudo[134398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:30 compute-0 ceph-mon[75294]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:30 compute-0 python3.9[134400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:30 compute-0 sudo[134398]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:30 compute-0 sudo[134521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crpxjobzojsafcehsdhwdskrnwtpufum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846789.6528935-29-67910943657313/AnsiballZ_copy.py'
Jan 31 08:06:30 compute-0 sudo[134521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:30 compute-0 python3.9[134523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846789.6528935-29-67910943657313/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=99483649f550cfa7541e42c2cedbbe9e650453a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:30 compute-0 sudo[134521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:30 compute-0 sshd-session[133821]: Connection closed by 192.168.122.30 port 41724
Jan 31 08:06:30 compute-0 sshd-session[133818]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:06:30 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 08:06:30 compute-0 systemd[1]: session-45.scope: Consumed 2.360s CPU time.
Jan 31 08:06:30 compute-0 systemd-logind[810]: Session 45 logged out. Waiting for processes to exit.
Jan 31 08:06:30 compute-0 systemd-logind[810]: Removed session 45.
Jan 31 08:06:32 compute-0 ceph-mon[75294]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:34 compute-0 ceph-mon[75294]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:36 compute-0 ceph-mon[75294]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:36 compute-0 sshd-session[134548]: Accepted publickey for zuul from 192.168.122.30 port 60426 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:06:36 compute-0 systemd-logind[810]: New session 46 of user zuul.
Jan 31 08:06:36 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 31 08:06:36 compute-0 sshd-session[134548]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:06:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:37 compute-0 python3.9[134701]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:06:38 compute-0 ceph-mon[75294]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:38 compute-0 sudo[134855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezqwjcolidalxhsgxphknhbvcwmkjsbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846797.977395-29-7938452527683/AnsiballZ_file.py'
Jan 31 08:06:38 compute-0 sudo[134855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:38 compute-0 python3.9[134857]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:38 compute-0 sudo[134855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:39 compute-0 sudo[135007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkltflzrhugeahsmmmcgjmzlwwlzcnho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846798.8103468-29-58414243395237/AnsiballZ_file.py'
Jan 31 08:06:39 compute-0 sudo[135007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:39 compute-0 python3.9[135009]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:06:39 compute-0 sudo[135007]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:39 compute-0 python3.9[135159]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:06:40 compute-0 ceph-mon[75294]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:40 compute-0 sudo[135309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkvhgbeeesoxciujeysczwemgfivqwka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846800.1029332-52-203583408391789/AnsiballZ_seboolean.py'
Jan 31 08:06:40 compute-0 sudo[135309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:40 compute-0 python3.9[135311]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 08:06:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:41 compute-0 ceph-mon[75294]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:41 compute-0 sudo[135309]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:42 compute-0 sudo[135465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggfnjlsszntqiztppulwlybbripoqaag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846801.9576917-62-41055836996555/AnsiballZ_setup.py'
Jan 31 08:06:42 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 08:06:42 compute-0 sudo[135465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:42 compute-0 python3.9[135467]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:06:42 compute-0 sudo[135465]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:43 compute-0 sudo[135549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghpdtzfkcoypdgtgbpveeewqsrjdwoph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846801.9576917-62-41055836996555/AnsiballZ_dnf.py'
Jan 31 08:06:43 compute-0 sudo[135549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:43 compute-0 python3.9[135551]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:06:44 compute-0 ceph-mon[75294]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:44 compute-0 sudo[135549]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:45 compute-0 ceph-mon[75294]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:45 compute-0 sudo[135629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:45 compute-0 sudo[135629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:45 compute-0 sudo[135629]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:45 compute-0 sudo[135654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:06:45 compute-0 sudo[135654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:45 compute-0 sudo[135766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwlufumsqqmndystbebtkbehubnpikan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846804.9540458-74-24480648022068/AnsiballZ_systemd.py'
Jan 31 08:06:45 compute-0 sudo[135766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:45 compute-0 sudo[135654]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:06:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:06:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:06:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:06:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:45 compute-0 python3.9[135768]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:06:45 compute-0 sudo[135786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:45 compute-0 sudo[135786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:45 compute-0 sudo[135786]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:45 compute-0 sudo[135766]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:45 compute-0 sudo[135814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:06:45 compute-0 sudo[135814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:06:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:06:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:06:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:06:46 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:46 compute-0 podman[135928]: 2026-01-31 08:06:46.207724814 +0000 UTC m=+0.044620083 container create a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:06:46 compute-0 systemd[1]: Started libpod-conmon-a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e.scope.
Jan 31 08:06:46 compute-0 podman[135928]: 2026-01-31 08:06:46.183917005 +0000 UTC m=+0.020812354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:46 compute-0 podman[135928]: 2026-01-31 08:06:46.297685508 +0000 UTC m=+0.134580857 container init a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:06:46 compute-0 podman[135928]: 2026-01-31 08:06:46.306872114 +0000 UTC m=+0.143767423 container start a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:46 compute-0 podman[135928]: 2026-01-31 08:06:46.310965623 +0000 UTC m=+0.147861002 container attach a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:06:46 compute-0 pedantic_kalam[135944]: 167 167
Jan 31 08:06:46 compute-0 systemd[1]: libpod-a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 conmon[135944]: conmon a6deb6a174dbc2ecc2ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e.scope/container/memory.events
Jan 31 08:06:46 compute-0 podman[135928]: 2026-01-31 08:06:46.313797134 +0000 UTC m=+0.150692443 container died a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9689d356a6d50f00ce6d2ff01d1971c9505f6a3d82961d01cc2b7cb45595e41-merged.mount: Deactivated successfully.
Jan 31 08:06:46 compute-0 podman[135928]: 2026-01-31 08:06:46.361861725 +0000 UTC m=+0.198757014 container remove a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:06:46 compute-0 systemd[1]: libpod-conmon-a6deb6a174dbc2ecc2ca8941cf36ea11f23d1a3785ef49c44d798260ad16211e.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 sudo[136048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdcgbxgwcffwbuffdzedlclexphziazl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846806.071791-82-56539660197469/AnsiballZ_edpm_nftables_snippet.py'
Jan 31 08:06:46 compute-0 sudo[136048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:46 compute-0 podman[136024]: 2026-01-31 08:06:46.526332686 +0000 UTC m=+0.055737684 container create 77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:46 compute-0 systemd[1]: Started libpod-conmon-77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036.scope.
Jan 31 08:06:46 compute-0 podman[136024]: 2026-01-31 08:06:46.499569811 +0000 UTC m=+0.028974919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd5792f1503ac8eed84efb4e1351071729075081174616d14c2d760962a6d68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd5792f1503ac8eed84efb4e1351071729075081174616d14c2d760962a6d68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd5792f1503ac8eed84efb4e1351071729075081174616d14c2d760962a6d68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd5792f1503ac8eed84efb4e1351071729075081174616d14c2d760962a6d68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd5792f1503ac8eed84efb4e1351071729075081174616d14c2d760962a6d68/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 podman[136024]: 2026-01-31 08:06:46.624569949 +0000 UTC m=+0.153975027 container init 77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:06:46 compute-0 podman[136024]: 2026-01-31 08:06:46.631629014 +0000 UTC m=+0.161034062 container start 77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:06:46 compute-0 podman[136024]: 2026-01-31 08:06:46.637159784 +0000 UTC m=+0.166564882 container attach 77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:06:46 compute-0 python3[136056]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 08:06:46 compute-0 sudo[136048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:47 compute-0 magical_raman[136060]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:06:47 compute-0 magical_raman[136060]: --> All data devices are unavailable
Jan 31 08:06:47 compute-0 ceph-mon[75294]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:47 compute-0 systemd[1]: libpod-77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036.scope: Deactivated successfully.
Jan 31 08:06:47 compute-0 conmon[136060]: conmon 77d3f7d8270dc4b170a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036.scope/container/memory.events
Jan 31 08:06:47 compute-0 podman[136024]: 2026-01-31 08:06:47.133291283 +0000 UTC m=+0.662696321 container died 77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cd5792f1503ac8eed84efb4e1351071729075081174616d14c2d760962a6d68-merged.mount: Deactivated successfully.
Jan 31 08:06:47 compute-0 podman[136024]: 2026-01-31 08:06:47.190626063 +0000 UTC m=+0.720031061 container remove 77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:06:47 compute-0 systemd[1]: libpod-conmon-77d3f7d8270dc4b170a459bbcc118a91976a91f18b990c8931ef69e97f7e8036.scope: Deactivated successfully.
Jan 31 08:06:47 compute-0 sudo[135814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-0 sudo[136246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whzfovymkuwsqpgxrqdkcoajohqjprmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846807.0078032-91-245310696140132/AnsiballZ_file.py'
Jan 31 08:06:47 compute-0 sudo[136246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:47 compute-0 sudo[136238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:47 compute-0 sudo[136238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-0 sudo[136238]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-0 sudo[136269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:06:47 compute-0 sudo[136269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-0 python3.9[136266]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:47 compute-0 sudo[136246]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-0 podman[136349]: 2026-01-31 08:06:47.64416686 +0000 UTC m=+0.044641553 container create edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:06:47 compute-0 systemd[1]: Started libpod-conmon-edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e.scope.
Jan 31 08:06:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:47 compute-0 podman[136349]: 2026-01-31 08:06:47.625704986 +0000 UTC m=+0.026179709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:47 compute-0 podman[136349]: 2026-01-31 08:06:47.723580929 +0000 UTC m=+0.124055632 container init edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:06:47 compute-0 podman[136349]: 2026-01-31 08:06:47.735250176 +0000 UTC m=+0.135724869 container start edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:47 compute-0 podman[136349]: 2026-01-31 08:06:47.738644604 +0000 UTC m=+0.139119327 container attach edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:06:47 compute-0 amazing_neumann[136399]: 167 167
Jan 31 08:06:47 compute-0 systemd[1]: libpod-edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e.scope: Deactivated successfully.
Jan 31 08:06:47 compute-0 conmon[136399]: conmon edc6b46ee3cca69e752e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e.scope/container/memory.events
Jan 31 08:06:47 compute-0 podman[136349]: 2026-01-31 08:06:47.740715965 +0000 UTC m=+0.141190658 container died edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f6049ee6ae479105dad621862a554f931af08be8702cbdb3ecc85edcd5bf4f4-merged.mount: Deactivated successfully.
Jan 31 08:06:47 compute-0 podman[136349]: 2026-01-31 08:06:47.779399584 +0000 UTC m=+0.179874327 container remove edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:47 compute-0 systemd[1]: libpod-conmon-edc6b46ee3cca69e752e0c9280b05f44443214f5d739c3ac1e747ced76f3926e.scope: Deactivated successfully.
Jan 31 08:06:47 compute-0 podman[136446]: 2026-01-31 08:06:47.958966412 +0000 UTC m=+0.054105008 container create 6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:06:48 compute-0 systemd[1]: Started libpod-conmon-6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0.scope.
Jan 31 08:06:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:48 compute-0 podman[136446]: 2026-01-31 08:06:47.939352464 +0000 UTC m=+0.034491080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ae3a4e4a2d5557b6c7a368d0f830a5242f4e43aab6ff97c75e0d37e7be44c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ae3a4e4a2d5557b6c7a368d0f830a5242f4e43aab6ff97c75e0d37e7be44c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ae3a4e4a2d5557b6c7a368d0f830a5242f4e43aab6ff97c75e0d37e7be44c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2ae3a4e4a2d5557b6c7a368d0f830a5242f4e43aab6ff97c75e0d37e7be44c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:48 compute-0 podman[136446]: 2026-01-31 08:06:48.051019155 +0000 UTC m=+0.146157801 container init 6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ritchie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:06:48 compute-0 podman[136446]: 2026-01-31 08:06:48.057168604 +0000 UTC m=+0.152307240 container start 6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:48 compute-0 podman[136446]: 2026-01-31 08:06:48.06153959 +0000 UTC m=+0.156678236 container attach 6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ritchie, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:48 compute-0 sudo[136516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvvqwpcbbbmgyimqpewvxlpwizabvzaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846807.5977068-99-201314643150726/AnsiballZ_stat.py'
Jan 31 08:06:48 compute-0 sudo[136516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:48 compute-0 python3.9[136520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:48 compute-0 sudo[136516]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:48 compute-0 determined_ritchie[136487]: {
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:     "0": [
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:         {
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "devices": [
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "/dev/loop3"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             ],
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_name": "ceph_lv0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_size": "21470642176",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "name": "ceph_lv0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "tags": {
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cluster_name": "ceph",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.crush_device_class": "",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.encrypted": "0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.objectstore": "bluestore",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osd_id": "0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.type": "block",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.vdo": "0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.with_tpm": "0"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             },
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "type": "block",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "vg_name": "ceph_vg0"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:         }
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:     ],
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:     "1": [
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:         {
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "devices": [
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "/dev/loop4"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             ],
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_name": "ceph_lv1",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_size": "21470642176",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "name": "ceph_lv1",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "tags": {
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cluster_name": "ceph",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.crush_device_class": "",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.encrypted": "0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.objectstore": "bluestore",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osd_id": "1",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.type": "block",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.vdo": "0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.with_tpm": "0"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             },
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "type": "block",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "vg_name": "ceph_vg1"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:         }
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:     ],
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:     "2": [
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:         {
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "devices": [
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "/dev/loop5"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             ],
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_name": "ceph_lv2",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_size": "21470642176",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "name": "ceph_lv2",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "tags": {
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.cluster_name": "ceph",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.crush_device_class": "",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.encrypted": "0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.objectstore": "bluestore",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osd_id": "2",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.type": "block",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.vdo": "0",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:                 "ceph.with_tpm": "0"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             },
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "type": "block",
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:             "vg_name": "ceph_vg2"
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:         }
Jan 31 08:06:48 compute-0 determined_ritchie[136487]:     ]
Jan 31 08:06:48 compute-0 determined_ritchie[136487]: }
Jan 31 08:06:48 compute-0 systemd[1]: libpod-6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[136446]: 2026-01-31 08:06:48.443256969 +0000 UTC m=+0.538395565 container died 6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed2ae3a4e4a2d5557b6c7a368d0f830a5242f4e43aab6ff97c75e0d37e7be44c-merged.mount: Deactivated successfully.
Jan 31 08:06:48 compute-0 sudo[136612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pomiwejgzknmfkuquahpkvwvvglqvcpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846807.5977068-99-201314643150726/AnsiballZ_file.py'
Jan 31 08:06:48 compute-0 podman[136446]: 2026-01-31 08:06:48.493168873 +0000 UTC m=+0.588307509 container remove 6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ritchie, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 31 08:06:48 compute-0 sudo[136612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:48 compute-0 systemd[1]: libpod-conmon-6442386f8321f6e4f2bb977b0f168a3573d2a2388d822cf7e0b907e885a956b0.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 sudo[136269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:48 compute-0 sudo[136615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:48 compute-0 sudo[136615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:48 compute-0 sudo[136615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:48 compute-0 sudo[136640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:06:48 compute-0 sudo[136640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:48 compute-0 python3.9[136614]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:48 compute-0 sudo[136612]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:49 compute-0 podman[136754]: 2026-01-31 08:06:49.003895216 +0000 UTC m=+0.056975131 container create 13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:06:49 compute-0 systemd[1]: Started libpod-conmon-13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0.scope.
Jan 31 08:06:49 compute-0 podman[136754]: 2026-01-31 08:06:48.981761455 +0000 UTC m=+0.034841390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:49 compute-0 podman[136754]: 2026-01-31 08:06:49.096585568 +0000 UTC m=+0.149665513 container init 13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:49 compute-0 podman[136754]: 2026-01-31 08:06:49.106134555 +0000 UTC m=+0.159214470 container start 13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:06:49 compute-0 cool_kowalevski[136816]: 167 167
Jan 31 08:06:49 compute-0 systemd[1]: libpod-13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0.scope: Deactivated successfully.
Jan 31 08:06:49 compute-0 conmon[136816]: conmon 13ac83b8f817b3d3187a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0.scope/container/memory.events
Jan 31 08:06:49 compute-0 podman[136754]: 2026-01-31 08:06:49.113751925 +0000 UTC m=+0.166831870 container attach 13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:06:49 compute-0 podman[136754]: 2026-01-31 08:06:49.114740064 +0000 UTC m=+0.167819999 container died 13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e58134565bddd77451ab6ebc350247e2a027b825e1900ee8cfaef5e10e825bc-merged.mount: Deactivated successfully.
Jan 31 08:06:49 compute-0 sudo[136854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnlqvcukvivqkrbpnteoefrzrtrmcthi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846808.8691843-111-40489254655136/AnsiballZ_stat.py'
Jan 31 08:06:49 compute-0 sudo[136854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:49 compute-0 podman[136754]: 2026-01-31 08:06:49.173892185 +0000 UTC m=+0.226972100 container remove 13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:49 compute-0 systemd[1]: libpod-conmon-13ac83b8f817b3d3187a9c549d84f900a6f43148edb7427040e67927cd2f4fc0.scope: Deactivated successfully.
Jan 31 08:06:49 compute-0 podman[136869]: 2026-01-31 08:06:49.318994036 +0000 UTC m=+0.036198449 container create b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mcclintock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:49 compute-0 python3.9[136860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:49 compute-0 systemd[1]: Started libpod-conmon-b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e.scope.
Jan 31 08:06:49 compute-0 sudo[136854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a90ee2d475dbc4d9b9577b19ebcf83545036cacfd055b8678ae71d7762212717/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a90ee2d475dbc4d9b9577b19ebcf83545036cacfd055b8678ae71d7762212717/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a90ee2d475dbc4d9b9577b19ebcf83545036cacfd055b8678ae71d7762212717/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a90ee2d475dbc4d9b9577b19ebcf83545036cacfd055b8678ae71d7762212717/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 podman[136869]: 2026-01-31 08:06:49.301642894 +0000 UTC m=+0.018847307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:49 compute-0 podman[136869]: 2026-01-31 08:06:49.422139721 +0000 UTC m=+0.139344134 container init b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mcclintock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:06:49 compute-0 podman[136869]: 2026-01-31 08:06:49.436305411 +0000 UTC m=+0.153509834 container start b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mcclintock, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:06:49 compute-0 podman[136869]: 2026-01-31 08:06:49.445664062 +0000 UTC m=+0.162868475 container attach b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 31 08:06:49 compute-0 sudo[136967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjjlmyelnmuzfjbmgwhpudjwlcymaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846808.8691843-111-40489254655136/AnsiballZ_file.py'
Jan 31 08:06:49 compute-0 sudo[136967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:49 compute-0 python3.9[136978]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.367z4de5 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:49 compute-0 sudo[136967]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:49 compute-0 ceph-mon[75294]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:50 compute-0 lvm[137141]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:06:50 compute-0 lvm[137142]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:06:50 compute-0 lvm[137142]: VG ceph_vg1 finished
Jan 31 08:06:50 compute-0 lvm[137141]: VG ceph_vg0 finished
Jan 31 08:06:50 compute-0 lvm[137144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:06:50 compute-0 lvm[137144]: VG ceph_vg2 finished
Jan 31 08:06:50 compute-0 sudo[137197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojsnntiuqwpzefnfqrwoknjcwwpglbds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846809.9565432-123-137193950907847/AnsiballZ_stat.py'
Jan 31 08:06:50 compute-0 sudo[137197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:50 compute-0 nifty_mcclintock[136888]: {}
Jan 31 08:06:50 compute-0 systemd[1]: libpod-b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e.scope: Deactivated successfully.
Jan 31 08:06:50 compute-0 systemd[1]: libpod-b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e.scope: Consumed 1.116s CPU time.
Jan 31 08:06:50 compute-0 podman[136869]: 2026-01-31 08:06:50.237769608 +0000 UTC m=+0.954974001 container died b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mcclintock, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a90ee2d475dbc4d9b9577b19ebcf83545036cacfd055b8678ae71d7762212717-merged.mount: Deactivated successfully.
Jan 31 08:06:50 compute-0 podman[136869]: 2026-01-31 08:06:50.317339121 +0000 UTC m=+1.034543514 container remove b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mcclintock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:06:50 compute-0 systemd[1]: libpod-conmon-b810e3d6c85d643c7b52dc902f1c0d210fd05a81aec6d1e0a6ca88ddcc0a6d5e.scope: Deactivated successfully.
Jan 31 08:06:50 compute-0 sudo[136640]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:50 compute-0 python3.9[137199]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:06:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:06:50 compute-0 sudo[137197]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[137216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:06:50 compute-0 sudo[137216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[137216]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[137314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejjsdflwfinhkbjbvoeqhfhyxyekvtlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846809.9565432-123-137193950907847/AnsiballZ_file.py'
Jan 31 08:06:50 compute-0 sudo[137314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:06:50
Jan 31 08:06:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:06:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:06:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root']
Jan 31 08:06:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:06:50 compute-0 python3.9[137316]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:50 compute-0 sudo[137314]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:06:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:06:51 compute-0 ceph-mon[75294]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:51 compute-0 sudo[137466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkrfxbtocvkgcrejrsittzztjrulhqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846811.0291703-136-217736086489053/AnsiballZ_command.py'
Jan 31 08:06:51 compute-0 sudo[137466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:51 compute-0 python3.9[137468]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:51 compute-0 sudo[137466]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 sudo[137619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqipwftuezxpnpcdlnpzqouxcwshhwdy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846811.7595813-144-222773727962689/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 08:06:52 compute-0 sudo[137619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:52 compute-0 python3[137621]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 08:06:52 compute-0 sudo[137619]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 sudo[137771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhvsniscnxsdnawsddsfqmbwbttqzkss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846812.5404088-152-206377116611633/AnsiballZ_stat.py'
Jan 31 08:06:52 compute-0 sudo[137771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.002576) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846813002712, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 832, "num_deletes": 251, "total_data_size": 1158450, "memory_usage": 1184448, "flush_reason": "Manual Compaction"}
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846813011141, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1137456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9045, "largest_seqno": 9876, "table_properties": {"data_size": 1133263, "index_size": 1909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8918, "raw_average_key_size": 18, "raw_value_size": 1124836, "raw_average_value_size": 2358, "num_data_blocks": 89, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846740, "oldest_key_time": 1769846740, "file_creation_time": 1769846813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8605 microseconds, and 3805 cpu microseconds.
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.011207) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1137456 bytes OK
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.011240) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.012667) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.012689) EVENT_LOG_v1 {"time_micros": 1769846813012683, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.012717) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1154319, prev total WAL file size 1154319, number of live WAL files 2.
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.013276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1110KB)], [23(7017KB)]
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846813013346, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8323312, "oldest_snapshot_seqno": -1}
Jan 31 08:06:53 compute-0 python3.9[137773]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3381 keys, 6726860 bytes, temperature: kUnknown
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846813052728, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6726860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6701417, "index_size": 15862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81848, "raw_average_key_size": 24, "raw_value_size": 6637480, "raw_average_value_size": 1963, "num_data_blocks": 689, "num_entries": 3381, "num_filter_entries": 3381, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769846813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.052954) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6726860 bytes
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.059786) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.9 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.9 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 3895, records dropped: 514 output_compression: NoCompression
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.059813) EVENT_LOG_v1 {"time_micros": 1769846813059800, "job": 8, "event": "compaction_finished", "compaction_time_micros": 39459, "compaction_time_cpu_micros": 10525, "output_level": 6, "num_output_files": 1, "total_output_size": 6726860, "num_input_records": 3895, "num_output_records": 3381, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846813060051, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846813060640, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.013220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.060680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.060684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.060686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.060687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:06:53.060689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:53 compute-0 sudo[137771]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:53 compute-0 sudo[137896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxtjoiugpdwfyfxntggmmsrndjksnnrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846812.5404088-152-206377116611633/AnsiballZ_copy.py'
Jan 31 08:06:53 compute-0 sudo[137896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:53 compute-0 python3.9[137898]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846812.5404088-152-206377116611633/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:53 compute-0 sudo[137896]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:54 compute-0 ceph-mon[75294]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:54 compute-0 sudo[138048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlirguppjqaokoxnirzasglyjcsafvfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846813.8528273-167-103179809751426/AnsiballZ_stat.py'
Jan 31 08:06:54 compute-0 sudo[138048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:54 compute-0 python3.9[138050]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:54 compute-0 sudo[138048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:54 compute-0 sudo[138173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iujscfcnozvdytqswiksbgtplouvunxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846813.8528273-167-103179809751426/AnsiballZ_copy.py'
Jan 31 08:06:54 compute-0 sudo[138173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:54 compute-0 python3.9[138175]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846813.8528273-167-103179809751426/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:54 compute-0 sudo[138173]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:55 compute-0 sudo[138325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwdjndvgbofcmmpyebpuufijdkdtgcus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846814.9981763-182-100298271333226/AnsiballZ_stat.py'
Jan 31 08:06:55 compute-0 sudo[138325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:55 compute-0 python3.9[138327]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:55 compute-0 sudo[138325]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:06:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:06:55 compute-0 sudo[138450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfdtkqjmddqhkhfqjvpscobdzbcccjlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846814.9981763-182-100298271333226/AnsiballZ_copy.py'
Jan 31 08:06:55 compute-0 sudo[138450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:56 compute-0 ceph-mon[75294]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:56 compute-0 python3.9[138452]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846814.9981763-182-100298271333226/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:56 compute-0 sudo[138450]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:56 compute-0 sudo[138602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttxllowejxqdcediwfrqflgygdpsylqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846816.3396826-197-211951251455385/AnsiballZ_stat.py'
Jan 31 08:06:56 compute-0 sudo[138602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:56 compute-0 python3.9[138604]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:56 compute-0 sudo[138602]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:57 compute-0 sudo[138727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbhqtybahjliswfdqafggwxyqxbufjdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846816.3396826-197-211951251455385/AnsiballZ_copy.py'
Jan 31 08:06:57 compute-0 sudo[138727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:57 compute-0 python3.9[138729]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846816.3396826-197-211951251455385/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:57 compute-0 sudo[138727]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:57 compute-0 ceph-mon[75294]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:58 compute-0 sudo[138879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwlhfewojmsiauospoeqnrmvnjwzffa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846817.756429-212-104556795833860/AnsiballZ_stat.py'
Jan 31 08:06:58 compute-0 sudo[138879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:58 compute-0 python3.9[138881]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:06:58 compute-0 sudo[138879]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:58 compute-0 sudo[139004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irkkuwrcqtzgwwjnrhjcnisjjirbpyup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846817.756429-212-104556795833860/AnsiballZ_copy.py'
Jan 31 08:06:58 compute-0 sudo[139004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:58 compute-0 python3.9[139006]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846817.756429-212-104556795833860/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:58 compute-0 sudo[139004]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:06:59 compute-0 sudo[139156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtagmngjabhryjftypmrcsulnlvywdkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846818.9082055-227-210022771626761/AnsiballZ_file.py'
Jan 31 08:06:59 compute-0 sudo[139156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:59 compute-0 python3.9[139158]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:06:59 compute-0 sudo[139156]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:59 compute-0 sudo[139308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpxwzwzyyjybnbnhljxiggbbdnheazqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846819.51566-235-204736747476432/AnsiballZ_command.py'
Jan 31 08:06:59 compute-0 sudo[139308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:59 compute-0 python3.9[139310]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:59 compute-0 sudo[139308]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:00 compute-0 ceph-mon[75294]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:00 compute-0 sudo[139463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdxoxernnnwghzcendrbvgmchyzmhvng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846820.102387-243-16390513040025/AnsiballZ_blockinfile.py'
Jan 31 08:07:00 compute-0 sudo[139463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:00 compute-0 python3.9[139465]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:00 compute-0 sudo[139463]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:01 compute-0 sudo[139615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjbpphlydncdnxfyeplqdssquozzlnvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846820.8739407-252-85388608800597/AnsiballZ_command.py'
Jan 31 08:07:01 compute-0 sudo[139615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:01 compute-0 python3.9[139617]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:01 compute-0 ceph-mon[75294]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:01 compute-0 sudo[139615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:01 compute-0 sudo[139768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmttfkznudasjizqkgzelznxcctfiwyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846821.487826-260-277606904835874/AnsiballZ_stat.py'
Jan 31 08:07:01 compute-0 sudo[139768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:01 compute-0 python3.9[139770]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:07:01 compute-0 sudo[139768]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:02 compute-0 sudo[139922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqiideqczqrucblaghyjtrkpfzwlemkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846822.1129546-268-202989456966229/AnsiballZ_command.py'
Jan 31 08:07:02 compute-0 sudo[139922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:02 compute-0 python3.9[139924]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:02 compute-0 sudo[139922]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:02 compute-0 sudo[140077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-playqrhsmrcentexocivwfdfhiswvpho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846822.7736473-276-69230894098189/AnsiballZ_file.py'
Jan 31 08:07:02 compute-0 sudo[140077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:03 compute-0 python3.9[140079]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:03 compute-0 sudo[140077]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:04 compute-0 ceph-mon[75294]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:04 compute-0 python3.9[140229]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:07:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:05 compute-0 sudo[140380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyyashllijcweamwnqjfxzcsgbnobjuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846825.0271244-316-242785829541458/AnsiballZ_command.py'
Jan 31 08:07:05 compute-0 sudo[140380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:05 compute-0 python3.9[140382]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:05 compute-0 ovs-vsctl[140383]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 08:07:05 compute-0 sudo[140380]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:05 compute-0 sudo[140533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utldubtgatgqthsxhkagdhjjomfgtcgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846825.7095275-325-115610052672008/AnsiballZ_command.py'
Jan 31 08:07:05 compute-0 sudo[140533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:06 compute-0 python3.9[140535]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:06 compute-0 ceph-mon[75294]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:07:06 compute-0 sudo[140533]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:06 compute-0 sudo[140688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgoqulcazrdwwckwsojdvsqcxmulbskq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846826.3914173-333-8321337587350/AnsiballZ_command.py'
Jan 31 08:07:06 compute-0 sudo[140688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:06 compute-0 python3.9[140690]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:06 compute-0 ovs-vsctl[140691]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 08:07:06 compute-0 sudo[140688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:07 compute-0 ceph-mon[75294]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:07 compute-0 python3.9[140841]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:07:07 compute-0 sudo[140993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdvxiypmvoexqhafjntziogkykrjoyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846827.4996397-350-95510131293267/AnsiballZ_file.py'
Jan 31 08:07:07 compute-0 sudo[140993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:08 compute-0 python3.9[140995]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:08 compute-0 sudo[140993]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:08 compute-0 sudo[141145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcwpdsoxtfnqkoailqbaquoyhsyulgem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846828.2481153-358-276279655411219/AnsiballZ_stat.py'
Jan 31 08:07:08 compute-0 sudo[141145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:08 compute-0 python3.9[141147]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:08 compute-0 sudo[141145]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:08 compute-0 sudo[141223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irplrdegpjfzylmudvsbmwvxutvvgqcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846828.2481153-358-276279655411219/AnsiballZ_file.py'
Jan 31 08:07:08 compute-0 sudo[141223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:09 compute-0 python3.9[141225]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:09 compute-0 sudo[141223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:09 compute-0 sudo[141375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxqwwrpualgaqndcbhyijunuiajuryzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846829.201362-358-247305615980725/AnsiballZ_stat.py'
Jan 31 08:07:09 compute-0 sudo[141375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:09 compute-0 python3.9[141377]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:09 compute-0 sudo[141375]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:09 compute-0 sudo[141453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcszggfmjsinrpspziifwuxsmiplowen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846829.201362-358-247305615980725/AnsiballZ_file.py'
Jan 31 08:07:09 compute-0 sudo[141453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:10 compute-0 ceph-mon[75294]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:10 compute-0 python3.9[141455]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:10 compute-0 sudo[141453]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:10 compute-0 sudo[141605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vugmfajvzdpeeekucncrfkeofkaadzat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846830.181046-381-6717527611860/AnsiballZ_file.py'
Jan 31 08:07:10 compute-0 sudo[141605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:10 compute-0 python3.9[141607]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:10 compute-0 sudo[141605]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:11 compute-0 sudo[141757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwiobhkvtfxvguqxkmlswgrnrlupzmfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846830.7994945-389-112138034390287/AnsiballZ_stat.py'
Jan 31 08:07:11 compute-0 sudo[141757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:11 compute-0 python3.9[141759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:11 compute-0 sudo[141757]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:11 compute-0 sudo[141835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gimoreqaeaxdchdhnutndpzjgcpnicmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846830.7994945-389-112138034390287/AnsiballZ_file.py'
Jan 31 08:07:11 compute-0 sudo[141835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:11 compute-0 python3.9[141837]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:11 compute-0 sudo[141835]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:12 compute-0 ceph-mon[75294]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:12 compute-0 sudo[141987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrekgsbfbybfublslytzskxrflgtpyqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846831.8169403-401-245305517443584/AnsiballZ_stat.py'
Jan 31 08:07:12 compute-0 sudo[141987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:12 compute-0 python3.9[141989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:12 compute-0 sudo[141987]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:12 compute-0 sudo[142065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyahzbxbvsasuontipsqtxnbbyapeqqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846831.8169403-401-245305517443584/AnsiballZ_file.py'
Jan 31 08:07:12 compute-0 sudo[142065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:12 compute-0 python3.9[142067]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:12 compute-0 sudo[142065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:13 compute-0 sudo[142217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehebwypmcnsojohcqwgxumpbozmqvupv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846832.8188226-413-183208790812774/AnsiballZ_systemd.py'
Jan 31 08:07:13 compute-0 sudo[142217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:13 compute-0 python3.9[142219]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:07:13 compute-0 systemd[1]: Reloading.
Jan 31 08:07:13 compute-0 systemd-rc-local-generator[142243]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:13 compute-0 systemd-sysv-generator[142246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:13 compute-0 sudo[142217]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:14 compute-0 sudo[142407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezethjuddmkwocazohdqjizqzdafezup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846833.852691-421-47195290979556/AnsiballZ_stat.py'
Jan 31 08:07:14 compute-0 sudo[142407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:14 compute-0 ceph-mon[75294]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:14 compute-0 python3.9[142409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:14 compute-0 sudo[142407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:14 compute-0 sudo[142485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvapmcursuspkfpvxitloajzeqfmauoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846833.852691-421-47195290979556/AnsiballZ_file.py'
Jan 31 08:07:14 compute-0 sudo[142485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:14 compute-0 python3.9[142487]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:14 compute-0 sudo[142485]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:15 compute-0 sudo[142637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpukmiyznyjrrmzwjbwnvvlwzgkqcljz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846834.9234333-433-219314794691298/AnsiballZ_stat.py'
Jan 31 08:07:15 compute-0 sudo[142637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:15 compute-0 ceph-mon[75294]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:15 compute-0 python3.9[142639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:15 compute-0 sudo[142637]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:15 compute-0 sudo[142715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zytujssvzzpvuqktqlzselymznrgnkin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846834.9234333-433-219314794691298/AnsiballZ_file.py'
Jan 31 08:07:15 compute-0 sudo[142715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:15 compute-0 python3.9[142717]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:16 compute-0 sudo[142715]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:16 compute-0 sudo[142867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnpiiukleiqrabktprzccyssedczxqpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846836.1612313-445-173574142734929/AnsiballZ_systemd.py'
Jan 31 08:07:16 compute-0 sudo[142867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:16 compute-0 python3.9[142869]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:07:16 compute-0 systemd[1]: Reloading.
Jan 31 08:07:16 compute-0 systemd-sysv-generator[142897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:16 compute-0 systemd-rc-local-generator[142892]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:17 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 08:07:17 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 08:07:17 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 08:07:17 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 08:07:17 compute-0 sudo[142867]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:17 compute-0 sudo[143060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxvpqlyvflqcefqjrcojizoscvvdgfcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846837.426727-455-117786366180676/AnsiballZ_file.py'
Jan 31 08:07:17 compute-0 sudo[143060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:17 compute-0 python3.9[143062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:17 compute-0 sudo[143060]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:18 compute-0 ceph-mon[75294]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:18 compute-0 sudo[143212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrpnljwalnhlfaxagrhwmchotwhonprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846838.021248-463-24850655491487/AnsiballZ_stat.py'
Jan 31 08:07:18 compute-0 sudo[143212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:18 compute-0 python3.9[143214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:18 compute-0 sudo[143212]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:18 compute-0 sudo[143335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wclfqtoaioekasmcttjrbfvxrwmpufpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846838.021248-463-24850655491487/AnsiballZ_copy.py'
Jan 31 08:07:18 compute-0 sudo[143335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:19 compute-0 python3.9[143337]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846838.021248-463-24850655491487/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:19 compute-0 sudo[143335]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:19 compute-0 ceph-mon[75294]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:19 compute-0 sudo[143487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfjcydwlofgyfrlahhxytyupzrpjute ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846839.4727705-480-137753005514031/AnsiballZ_file.py'
Jan 31 08:07:19 compute-0 sudo[143487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:19 compute-0 python3.9[143489]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:19 compute-0 sudo[143487]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:20 compute-0 sudo[143639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avzpskzwjjtwewdzathbtimmfvaujgwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846840.04879-488-219475434396011/AnsiballZ_file.py'
Jan 31 08:07:20 compute-0 sudo[143639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:20 compute-0 python3.9[143641]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:20 compute-0 sudo[143639]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:20 compute-0 sudo[143791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkegzjoeqkqppmibluctcslwdhhhprhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846840.6915903-496-186855336952666/AnsiballZ_stat.py'
Jan 31 08:07:20 compute-0 sudo[143791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:21 compute-0 python3.9[143793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:21 compute-0 sudo[143791]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:21 compute-0 ceph-mon[75294]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:21 compute-0 sudo[143914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhnygmfluziiaktanpxlmfqwshsptrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846840.6915903-496-186855336952666/AnsiballZ_copy.py'
Jan 31 08:07:21 compute-0 sudo[143914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:21 compute-0 python3.9[143916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846840.6915903-496-186855336952666/.source.json _original_basename=.kf91nn6i follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:21 compute-0 sudo[143914]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:22 compute-0 python3.9[144066]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:24 compute-0 sudo[144487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnlmecvbrtfieaslktrjwyflszovkqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846843.6778243-536-181140332701841/AnsiballZ_container_config_data.py'
Jan 31 08:07:24 compute-0 sudo[144487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:24 compute-0 ceph-mon[75294]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:24 compute-0 python3.9[144489]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 08:07:24 compute-0 sudo[144487]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:25 compute-0 sudo[144639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmevirscoptytesxfzdchfpfhevjjxzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846844.6362429-547-187599202921096/AnsiballZ_container_config_hash.py'
Jan 31 08:07:25 compute-0 sudo[144639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:25 compute-0 python3.9[144641]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:07:25 compute-0 ceph-mon[75294]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:25 compute-0 sudo[144639]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:25 compute-0 sudo[144791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-japtrxbvvgtvujnbsxjnpipbkfudnhcr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846845.4667053-557-162094880164623/AnsiballZ_edpm_container_manage.py'
Jan 31 08:07:25 compute-0 sudo[144791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:26 compute-0 python3[144793]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:07:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:28 compute-0 ceph-mon[75294]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:29 compute-0 ceph-mon[75294]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:31 compute-0 ceph-mon[75294]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:31 compute-0 podman[144807]: 2026-01-31 08:07:31.42702086 +0000 UTC m=+5.205029710 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 08:07:31 compute-0 podman[144924]: 2026-01-31 08:07:31.58385658 +0000 UTC m=+0.069633767 container create c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:07:31 compute-0 podman[144924]: 2026-01-31 08:07:31.542348525 +0000 UTC m=+0.028125732 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 08:07:31 compute-0 python3[144793]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 08:07:31 compute-0 sudo[144791]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:32 compute-0 sudo[145113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyddjzdprgyywdzowuebchkxoheubwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846851.99266-565-20028543707097/AnsiballZ_stat.py'
Jan 31 08:07:32 compute-0 sudo[145113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:32 compute-0 python3.9[145115]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:07:32 compute-0 sudo[145113]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:32 compute-0 sudo[145267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fewdxumylljalygxsrjtfrswyanzjeqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846852.6410346-574-72961768404834/AnsiballZ_file.py'
Jan 31 08:07:32 compute-0 sudo[145267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:33 compute-0 python3.9[145269]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:33 compute-0 sudo[145267]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:33 compute-0 sudo[145343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phbghyspvqzhmwuhknbqifqznsevchgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846852.6410346-574-72961768404834/AnsiballZ_stat.py'
Jan 31 08:07:33 compute-0 sudo[145343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:33 compute-0 python3.9[145345]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:07:33 compute-0 sudo[145343]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:33 compute-0 sudo[145494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpjqitmxrijjcremoevjrzjglomwqjyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846853.5154376-574-111137592141441/AnsiballZ_copy.py'
Jan 31 08:07:33 compute-0 sudo[145494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:34 compute-0 ceph-mon[75294]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:34 compute-0 python3.9[145496]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846853.5154376-574-111137592141441/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:34 compute-0 sudo[145494]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:34 compute-0 sudo[145570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdhkdjjqufnowcuweelnpyzxdvqfypjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846853.5154376-574-111137592141441/AnsiballZ_systemd.py'
Jan 31 08:07:34 compute-0 sudo[145570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:34 compute-0 python3.9[145572]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:07:34 compute-0 systemd[1]: Reloading.
Jan 31 08:07:34 compute-0 systemd-sysv-generator[145603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:34 compute-0 systemd-rc-local-generator[145598]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:34 compute-0 sudo[145570]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:35 compute-0 sudo[145681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emuiswspznptfufacmorwoiczddoflsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846853.5154376-574-111137592141441/AnsiballZ_systemd.py'
Jan 31 08:07:35 compute-0 sudo[145681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:35 compute-0 python3.9[145683]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:07:35 compute-0 systemd[1]: Reloading.
Jan 31 08:07:35 compute-0 systemd-rc-local-generator[145710]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:35 compute-0 systemd-sysv-generator[145713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:35 compute-0 systemd[1]: Starting ovn_controller container...
Jan 31 08:07:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b431f792ebde7c132fa7e4c95052a496563a95123d882a321c1b9c6e7ce38e0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1.
Jan 31 08:07:35 compute-0 podman[145724]: 2026-01-31 08:07:35.861033251 +0000 UTC m=+0.152103331 container init c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 08:07:35 compute-0 ovn_controller[145740]: + sudo -E kolla_set_configs
Jan 31 08:07:35 compute-0 podman[145724]: 2026-01-31 08:07:35.883820594 +0000 UTC m=+0.174890654 container start c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:07:35 compute-0 edpm-start-podman-container[145724]: ovn_controller
Jan 31 08:07:35 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 31 08:07:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 08:07:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 08:07:35 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 31 08:07:35 compute-0 edpm-start-podman-container[145723]: Creating additional drop-in dependency for "ovn_controller" (c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1)
Jan 31 08:07:35 compute-0 systemd[145779]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 31 08:07:35 compute-0 podman[145747]: 2026-01-31 08:07:35.975351776 +0000 UTC m=+0.081236030 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:07:35 compute-0 systemd[1]: Reloading.
Jan 31 08:07:36 compute-0 ceph-mon[75294]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:36 compute-0 systemd-rc-local-generator[145825]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:36 compute-0 systemd-sysv-generator[145828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:36 compute-0 systemd[145779]: Queued start job for default target Main User Target.
Jan 31 08:07:36 compute-0 systemd[145779]: Created slice User Application Slice.
Jan 31 08:07:36 compute-0 systemd[145779]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 08:07:36 compute-0 systemd[145779]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:07:36 compute-0 systemd[145779]: Reached target Paths.
Jan 31 08:07:36 compute-0 systemd[145779]: Reached target Timers.
Jan 31 08:07:36 compute-0 systemd[145779]: Starting D-Bus User Message Bus Socket...
Jan 31 08:07:36 compute-0 systemd[145779]: Starting Create User's Volatile Files and Directories...
Jan 31 08:07:36 compute-0 systemd[145779]: Finished Create User's Volatile Files and Directories.
Jan 31 08:07:36 compute-0 systemd[145779]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:07:36 compute-0 systemd[145779]: Reached target Sockets.
Jan 31 08:07:36 compute-0 systemd[145779]: Reached target Basic System.
Jan 31 08:07:36 compute-0 systemd[145779]: Reached target Main User Target.
Jan 31 08:07:36 compute-0 systemd[145779]: Startup finished in 173ms.
Jan 31 08:07:36 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 31 08:07:36 compute-0 systemd[1]: c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1-4014707f80f443c5.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 08:07:36 compute-0 systemd[1]: c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1-4014707f80f443c5.service: Failed with result 'exit-code'.
Jan 31 08:07:36 compute-0 systemd[1]: Started ovn_controller container.
Jan 31 08:07:36 compute-0 systemd[1]: Started Session c1 of User root.
Jan 31 08:07:36 compute-0 sudo[145681]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:36 compute-0 ovn_controller[145740]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:07:36 compute-0 ovn_controller[145740]: INFO:__main__:Validating config file
Jan 31 08:07:36 compute-0 ovn_controller[145740]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:07:36 compute-0 ovn_controller[145740]: INFO:__main__:Writing out command to execute
Jan 31 08:07:36 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: ++ cat /run_command
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + ARGS=
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + sudo kolla_copy_cacerts
Jan 31 08:07:36 compute-0 systemd[1]: Started Session c2 of User root.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + [[ ! -n '' ]]
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + . kolla_extend_start
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 08:07:36 compute-0 ovn_controller[145740]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + umask 0022
Jan 31 08:07:36 compute-0 ovn_controller[145740]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 08:07:36 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.3802] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.3810] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <warn>  [1769846856.3812] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.3820] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.3826] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.3829] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 08:07:36 compute-0 kernel: br-int: entered promiscuous mode
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00013|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00014|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00015|main|INFO|OVS feature set changed, force recompute.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00016|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00020|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 08:07:36 compute-0 systemd-udevd[145872]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:07:36 compute-0 ovn_controller[145740]: 2026-01-31T08:07:36Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.4415] manager: (ovn-7c3eae-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 08:07:36 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 08:07:36 compute-0 systemd-udevd[145874]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.4616] device (genev_sys_6081): carrier: link connected
Jan 31 08:07:36 compute-0 NetworkManager[49077]: <info>  [1769846856.4620] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 08:07:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:37 compute-0 python3.9[146002]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 08:07:37 compute-0 sudo[146152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgvogdpdfsrsbqdvirdyvhlikbgwmdan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846857.5242376-619-266469611155736/AnsiballZ_stat.py'
Jan 31 08:07:37 compute-0 sudo[146152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:38 compute-0 ceph-mon[75294]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:38 compute-0 python3.9[146154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:38 compute-0 sudo[146152]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:38 compute-0 sudo[146275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjcwxrdjgcovuterpeqyiujzovxpxnof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846857.5242376-619-266469611155736/AnsiballZ_copy.py'
Jan 31 08:07:38 compute-0 sudo[146275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:38 compute-0 python3.9[146277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846857.5242376-619-266469611155736/.source.yaml _original_basename=.k355b1wa follow=False checksum=c677d5a1d9a8cec3570fad39293f30b192b27746 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:38 compute-0 sudo[146275]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:38 compute-0 sudo[146427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgixnrfazxfkdmssehziqaajxkjzbcax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846858.7410877-634-33829409799298/AnsiballZ_command.py'
Jan 31 08:07:38 compute-0 sudo[146427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:39 compute-0 python3.9[146429]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:39 compute-0 ovs-vsctl[146430]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 08:07:39 compute-0 sudo[146427]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:39 compute-0 sudo[146580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmudauwofpjxtgblpurakfbqyvcwvee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846859.40477-642-117899853552099/AnsiballZ_command.py'
Jan 31 08:07:39 compute-0 sudo[146580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:39 compute-0 python3.9[146582]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:39 compute-0 ovs-vsctl[146584]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 08:07:39 compute-0 sudo[146580]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:40 compute-0 ceph-mon[75294]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:40 compute-0 sudo[146735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvmpixeybbqjudrdojlsvimqpwbnouox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846860.1736841-656-231978802066300/AnsiballZ_command.py'
Jan 31 08:07:40 compute-0 sudo[146735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:40 compute-0 python3.9[146737]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:40 compute-0 ovs-vsctl[146738]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 08:07:40 compute-0 sudo[146735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:41 compute-0 sshd-session[134551]: Connection closed by 192.168.122.30 port 60426
Jan 31 08:07:41 compute-0 sshd-session[134548]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:07:41 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 31 08:07:41 compute-0 systemd[1]: session-46.scope: Consumed 51.928s CPU time.
Jan 31 08:07:41 compute-0 systemd-logind[810]: Session 46 logged out. Waiting for processes to exit.
Jan 31 08:07:41 compute-0 systemd-logind[810]: Removed session 46.
Jan 31 08:07:42 compute-0 ceph-mon[75294]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:43 compute-0 ceph-mon[75294]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:46 compute-0 ceph-mon[75294]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:46 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 31 08:07:46 compute-0 systemd[145779]: Activating special unit Exit the Session...
Jan 31 08:07:46 compute-0 systemd[145779]: Stopped target Main User Target.
Jan 31 08:07:46 compute-0 systemd[145779]: Stopped target Basic System.
Jan 31 08:07:46 compute-0 systemd[145779]: Stopped target Paths.
Jan 31 08:07:46 compute-0 systemd[145779]: Stopped target Sockets.
Jan 31 08:07:46 compute-0 systemd[145779]: Stopped target Timers.
Jan 31 08:07:46 compute-0 systemd[145779]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:07:46 compute-0 systemd[145779]: Closed D-Bus User Message Bus Socket.
Jan 31 08:07:46 compute-0 systemd[145779]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:07:46 compute-0 systemd[145779]: Removed slice User Application Slice.
Jan 31 08:07:46 compute-0 systemd[145779]: Reached target Shutdown.
Jan 31 08:07:46 compute-0 systemd[145779]: Finished Exit the Session.
Jan 31 08:07:46 compute-0 systemd[145779]: Reached target Exit the Session.
Jan 31 08:07:46 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 08:07:46 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 31 08:07:46 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 08:07:46 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 08:07:46 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 08:07:46 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 08:07:46 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 08:07:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:47 compute-0 ceph-mon[75294]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:48 compute-0 sshd-session[146764]: Accepted publickey for zuul from 192.168.122.30 port 59352 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:07:48 compute-0 systemd-logind[810]: New session 48 of user zuul.
Jan 31 08:07:48 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 31 08:07:48 compute-0 sshd-session[146764]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:07:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:49 compute-0 python3.9[146917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:07:50 compute-0 ceph-mon[75294]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:50 compute-0 sudo[147004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:50 compute-0 sudo[147004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:50 compute-0 sudo[147004]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 sudo[147047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:07:50 compute-0 sudo[147047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:50 compute-0 sudo[147121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnmtvtnlvxysnbggqtgkerzvvwvxwhob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846870.2279997-29-146449193717053/AnsiballZ_file.py'
Jan 31 08:07:50 compute-0 sudo[147121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:07:50
Jan 31 08:07:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:07:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:07:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms']
Jan 31 08:07:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:07:50 compute-0 python3.9[147123]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:50 compute-0 sudo[147047]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 sudo[147121]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:50 compute-0 sudo[147168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:50 compute-0 sudo[147168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:50 compute-0 sudo[147168]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:50 compute-0 sudo[147198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:07:50 compute-0 sudo[147198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:51 compute-0 sudo[147344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lagcwynugzxxcrwdbgksiigaeaizjeuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846870.9642763-29-93886115503375/AnsiballZ_file.py'
Jan 31 08:07:51 compute-0 sudo[147344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:51 compute-0 python3.9[147348]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:51 compute-0 sudo[147344]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:51 compute-0 sudo[147198]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:51 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:07:51 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:07:51 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:07:51 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:07:51 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:51 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:51 compute-0 sudo[147418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:51 compute-0 sudo[147418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:51 compute-0 sudo[147418]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:51 compute-0 sudo[147472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:07:51 compute-0 sudo[147472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:51 compute-0 sudo[147576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmgzmsjiwgwytcpfejakktqsavhuvxdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846871.5176365-29-80718862167664/AnsiballZ_file.py'
Jan 31 08:07:51 compute-0 sudo[147576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:51 compute-0 ceph-mon[75294]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:51 compute-0 python3.9[147578]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:51 compute-0 sudo[147576]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:51 compute-0 podman[147594]: 2026-01-31 08:07:51.93437806 +0000 UTC m=+0.095045577 container create 4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:51 compute-0 podman[147594]: 2026-01-31 08:07:51.861873279 +0000 UTC m=+0.022540826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:51 compute-0 systemd[1]: Started libpod-conmon-4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf.scope.
Jan 31 08:07:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:52 compute-0 podman[147594]: 2026-01-31 08:07:52.042563213 +0000 UTC m=+0.203230750 container init 4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:52 compute-0 podman[147594]: 2026-01-31 08:07:52.049476547 +0000 UTC m=+0.210144064 container start 4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:07:52 compute-0 podman[147594]: 2026-01-31 08:07:52.053784114 +0000 UTC m=+0.214451651 container attach 4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:07:52 compute-0 busy_pascal[147635]: 167 167
Jan 31 08:07:52 compute-0 systemd[1]: libpod-4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[147594]: 2026-01-31 08:07:52.055481214 +0000 UTC m=+0.216148731 container died 4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3f2c734344b7f2c5cb244e3785f4268cc01364019cf3734e3f4702547a8d7cd-merged.mount: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[147594]: 2026-01-31 08:07:52.107708066 +0000 UTC m=+0.268375583 container remove 4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:52 compute-0 systemd[1]: libpod-conmon-4446c9095250b40342240c577a28f27f3cb0f58303b3698193543831beebedaf.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[147757]: 2026-01-31 08:07:52.265909296 +0000 UTC m=+0.041368842 container create 039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:52 compute-0 sudo[147798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zejyiossxxkrkxrpiiyvvonrcjxsksuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846872.0439954-29-208963217293059/AnsiballZ_file.py'
Jan 31 08:07:52 compute-0 sudo[147798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:52 compute-0 systemd[1]: Started libpod-conmon-039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6.scope.
Jan 31 08:07:52 compute-0 podman[147757]: 2026-01-31 08:07:52.247905185 +0000 UTC m=+0.023364751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3249114ae907af60f8c1da5bbcc262add9eaaeaf42d4d14631acd776a793e6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3249114ae907af60f8c1da5bbcc262add9eaaeaf42d4d14631acd776a793e6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3249114ae907af60f8c1da5bbcc262add9eaaeaf42d4d14631acd776a793e6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3249114ae907af60f8c1da5bbcc262add9eaaeaf42d4d14631acd776a793e6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3249114ae907af60f8c1da5bbcc262add9eaaeaf42d4d14631acd776a793e6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 podman[147757]: 2026-01-31 08:07:52.377925132 +0000 UTC m=+0.153384708 container init 039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:52 compute-0 podman[147757]: 2026-01-31 08:07:52.383454806 +0000 UTC m=+0.158914352 container start 039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:52 compute-0 podman[147757]: 2026-01-31 08:07:52.38903142 +0000 UTC m=+0.164490996 container attach 039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:52 compute-0 python3.9[147802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:52 compute-0 sudo[147798]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:52 compute-0 sudo[147969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obozumkrphpplolblblijyiqcqkqmkzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846872.5787425-29-249117600479194/AnsiballZ_file.py'
Jan 31 08:07:52 compute-0 sudo[147969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:52 compute-0 recursing_perlman[147803]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:07:52 compute-0 recursing_perlman[147803]: --> All data devices are unavailable
Jan 31 08:07:52 compute-0 systemd[1]: libpod-039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[147757]: 2026-01-31 08:07:52.834992295 +0000 UTC m=+0.610451851 container died 039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3249114ae907af60f8c1da5bbcc262add9eaaeaf42d4d14631acd776a793e6f-merged.mount: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[147757]: 2026-01-31 08:07:52.926468115 +0000 UTC m=+0.701927661 container remove 039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:07:52 compute-0 systemd[1]: libpod-conmon-039e679a55cbddab028f5a3b1729b5676c6b8688170727ea2e6d6b7e4762f8d6.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 sudo[147472]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:52 compute-0 python3.9[147972]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:53 compute-0 sudo[147987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:53 compute-0 sudo[147987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:53 compute-0 sudo[147987]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:53 compute-0 sudo[147969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:53 compute-0 sudo[148012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:07:53 compute-0 sudo[148012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:53 compute-0 podman[148125]: 2026-01-31 08:07:53.290476031 +0000 UTC m=+0.041683932 container create 4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:53 compute-0 systemd[1]: Started libpod-conmon-4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759.scope.
Jan 31 08:07:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:53 compute-0 podman[148125]: 2026-01-31 08:07:53.267577845 +0000 UTC m=+0.018785756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:53 compute-0 podman[148125]: 2026-01-31 08:07:53.413880414 +0000 UTC m=+0.165088335 container init 4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:07:53 compute-0 podman[148125]: 2026-01-31 08:07:53.419208921 +0000 UTC m=+0.170416812 container start 4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ganguly, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:53 compute-0 infallible_ganguly[148166]: 167 167
Jan 31 08:07:53 compute-0 systemd[1]: libpod-4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759.scope: Deactivated successfully.
Jan 31 08:07:53 compute-0 conmon[148166]: conmon 4bfba288de497dc3143e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759.scope/container/memory.events
Jan 31 08:07:53 compute-0 podman[148125]: 2026-01-31 08:07:53.426212448 +0000 UTC m=+0.177420359 container attach 4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:53 compute-0 podman[148125]: 2026-01-31 08:07:53.426526577 +0000 UTC m=+0.177734468 container died 4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1a8ea648cfba73ee6b3d5dae114ba3a865257139d77dd828c051542d104a206-merged.mount: Deactivated successfully.
Jan 31 08:07:53 compute-0 podman[148125]: 2026-01-31 08:07:53.477606405 +0000 UTC m=+0.228814296 container remove 4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:07:53 compute-0 systemd[1]: libpod-conmon-4bfba288de497dc3143e4b1efc2ca3b1d826761fbd65ba5564e52dc9118e6759.scope: Deactivated successfully.
Jan 31 08:07:53 compute-0 podman[148240]: 2026-01-31 08:07:53.610481658 +0000 UTC m=+0.040213758 container create 1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:07:53 compute-0 systemd[1]: Started libpod-conmon-1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b.scope.
Jan 31 08:07:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b3c0814f781124e742e935ac115085fa45c778e0b0a310fde07ce6b4dce9ce0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b3c0814f781124e742e935ac115085fa45c778e0b0a310fde07ce6b4dce9ce0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b3c0814f781124e742e935ac115085fa45c778e0b0a310fde07ce6b4dce9ce0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b3c0814f781124e742e935ac115085fa45c778e0b0a310fde07ce6b4dce9ce0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 podman[148240]: 2026-01-31 08:07:53.590950581 +0000 UTC m=+0.020682711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:53 compute-0 podman[148240]: 2026-01-31 08:07:53.698026372 +0000 UTC m=+0.127758502 container init 1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:07:53 compute-0 podman[148240]: 2026-01-31 08:07:53.703089831 +0000 UTC m=+0.132821931 container start 1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_almeida, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:53 compute-0 podman[148240]: 2026-01-31 08:07:53.720706771 +0000 UTC m=+0.150438911 container attach 1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_almeida, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:53 compute-0 python3.9[148221]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]: {
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:     "0": [
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:         {
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "devices": [
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "/dev/loop3"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             ],
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_name": "ceph_lv0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_size": "21470642176",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "name": "ceph_lv0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "tags": {
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.crush_device_class": "",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.encrypted": "0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osd_id": "0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.type": "block",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.vdo": "0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.with_tpm": "0"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             },
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "type": "block",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "vg_name": "ceph_vg0"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:         }
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:     ],
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:     "1": [
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:         {
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "devices": [
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "/dev/loop4"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             ],
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_name": "ceph_lv1",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_size": "21470642176",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "name": "ceph_lv1",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "tags": {
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.crush_device_class": "",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.encrypted": "0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osd_id": "1",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.type": "block",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.vdo": "0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.with_tpm": "0"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             },
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "type": "block",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "vg_name": "ceph_vg1"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:         }
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:     ],
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:     "2": [
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:         {
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "devices": [
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "/dev/loop5"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             ],
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_name": "ceph_lv2",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_size": "21470642176",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "name": "ceph_lv2",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "tags": {
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.crush_device_class": "",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.encrypted": "0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osd_id": "2",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.type": "block",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.vdo": "0",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:                 "ceph.with_tpm": "0"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             },
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "type": "block",
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:             "vg_name": "ceph_vg2"
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:         }
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]:     ]
Jan 31 08:07:53 compute-0 vigilant_almeida[148257]: }
Jan 31 08:07:53 compute-0 systemd[1]: libpod-1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b.scope: Deactivated successfully.
Jan 31 08:07:53 compute-0 podman[148240]: 2026-01-31 08:07:53.961753117 +0000 UTC m=+0.391485217 container died 1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b3c0814f781124e742e935ac115085fa45c778e0b0a310fde07ce6b4dce9ce0-merged.mount: Deactivated successfully.
Jan 31 08:07:54 compute-0 ceph-mon[75294]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:54 compute-0 podman[148240]: 2026-01-31 08:07:54.07975352 +0000 UTC m=+0.509485660 container remove 1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:54 compute-0 systemd[1]: libpod-conmon-1b6ec89750876b79eaa3ca4451240ec012cbecc11d92e94cf2b3fbeec365193b.scope: Deactivated successfully.
Jan 31 08:07:54 compute-0 sudo[148012]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 sudo[148367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:54 compute-0 sudo[148367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 sudo[148367]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 sudo[148405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:07:54 compute-0 sudo[148405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 sudo[148477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysvdiamdxstwhaxnsvkwkysfwjtyxhgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846873.8732703-73-231216985070726/AnsiballZ_seboolean.py'
Jan 31 08:07:54 compute-0 sudo[148477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:54 compute-0 python3.9[148479]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 08:07:54 compute-0 podman[148494]: 2026-01-31 08:07:54.515574635 +0000 UTC m=+0.031581982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:07:55 compute-0 ceph-mon[75294]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:07:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:56 compute-0 podman[148494]: 2026-01-31 08:07:56.273506369 +0000 UTC m=+1.789513696 container create 2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_babbage, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:07:56 compute-0 systemd[1]: Started libpod-conmon-2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7.scope.
Jan 31 08:07:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:57 compute-0 podman[148494]: 2026-01-31 08:07:57.221083961 +0000 UTC m=+2.737091318 container init 2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:07:57 compute-0 podman[148494]: 2026-01-31 08:07:57.226820391 +0000 UTC m=+2.742827718 container start 2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:57 compute-0 unruffled_babbage[148512]: 167 167
Jan 31 08:07:57 compute-0 systemd[1]: libpod-2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7.scope: Deactivated successfully.
Jan 31 08:07:57 compute-0 ceph-mon[75294]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:57 compute-0 podman[148494]: 2026-01-31 08:07:57.332825869 +0000 UTC m=+2.848833196 container attach 2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_babbage, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:57 compute-0 podman[148494]: 2026-01-31 08:07:57.333169639 +0000 UTC m=+2.849176966 container died 2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_babbage, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:57 compute-0 sudo[148477]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-78b3586f910512a84b1847a6d253f93e7519baaf1a3c9aa60ea43cf412ed5b4c-merged.mount: Deactivated successfully.
Jan 31 08:07:57 compute-0 podman[148494]: 2026-01-31 08:07:57.533417322 +0000 UTC m=+3.049424649 container remove 2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:57 compute-0 systemd[1]: libpod-conmon-2ae39ee0e6aec9cee38aacc4ec52f97716a92124839d3f128a1709005ad4cac7.scope: Deactivated successfully.
Jan 31 08:07:57 compute-0 podman[148615]: 2026-01-31 08:07:57.657107642 +0000 UTC m=+0.049068649 container create 18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:07:57 compute-0 systemd[1]: Started libpod-conmon-18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54.scope.
Jan 31 08:07:57 compute-0 podman[148615]: 2026-01-31 08:07:57.62753781 +0000 UTC m=+0.019498837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315a58007a48e3ed3f6354acf0757cdbf03b939b747e721b277a9fc697731563/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315a58007a48e3ed3f6354acf0757cdbf03b939b747e721b277a9fc697731563/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315a58007a48e3ed3f6354acf0757cdbf03b939b747e721b277a9fc697731563/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315a58007a48e3ed3f6354acf0757cdbf03b939b747e721b277a9fc697731563/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:57 compute-0 podman[148615]: 2026-01-31 08:07:57.763847913 +0000 UTC m=+0.155808970 container init 18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:07:57 compute-0 podman[148615]: 2026-01-31 08:07:57.769984634 +0000 UTC m=+0.161945671 container start 18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:57 compute-0 podman[148615]: 2026-01-31 08:07:57.777930779 +0000 UTC m=+0.169891806 container attach 18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:07:58 compute-0 python3.9[148710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:58 compute-0 lvm[148852]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:58 compute-0 lvm[148852]: VG ceph_vg0 finished
Jan 31 08:07:58 compute-0 lvm[148855]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:07:58 compute-0 lvm[148855]: VG ceph_vg1 finished
Jan 31 08:07:58 compute-0 lvm[148861]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:58 compute-0 lvm[148861]: VG ceph_vg2 finished
Jan 31 08:07:58 compute-0 zen_brattain[148632]: {}
Jan 31 08:07:58 compute-0 systemd[1]: libpod-18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54.scope: Deactivated successfully.
Jan 31 08:07:58 compute-0 podman[148615]: 2026-01-31 08:07:58.584998034 +0000 UTC m=+0.976959051 container died 18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:07:58 compute-0 systemd[1]: libpod-18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54.scope: Consumed 1.081s CPU time.
Jan 31 08:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-315a58007a48e3ed3f6354acf0757cdbf03b939b747e721b277a9fc697731563-merged.mount: Deactivated successfully.
Jan 31 08:07:58 compute-0 podman[148615]: 2026-01-31 08:07:58.683947774 +0000 UTC m=+1.075908781 container remove 18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_brattain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:07:58 compute-0 systemd[1]: libpod-conmon-18e53bb236229c8be8d5c733cfccf8f1aa979aa2a36d56cef657c96f303deb54.scope: Deactivated successfully.
Jan 31 08:07:58 compute-0 python3.9[148909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846877.527488-81-201811586230301/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:58 compute-0 sudo[148405]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:58 compute-0 sudo[148933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:07:58 compute-0 sudo[148933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:58 compute-0 sudo[148933]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:59 compute-0 python3.9[149099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:07:59 compute-0 python3.9[149220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846878.8866737-96-211793664281138/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:07:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:07:59 compute-0 ceph-mon[75294]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:00 compute-0 sudo[149370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gplsdiceiuycwnqirafddolcygkoxxiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846880.0136504-113-41270160728205/AnsiballZ_setup.py'
Jan 31 08:08:00 compute-0 sudo[149370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:00 compute-0 python3.9[149372]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:08:00 compute-0 sudo[149370]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:01 compute-0 sudo[149455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feaaijrskohsebdomsowtmhfdhoekcey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846880.0136504-113-41270160728205/AnsiballZ_dnf.py'
Jan 31 08:08:01 compute-0 sudo[149455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:01 compute-0 python3.9[149457]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:08:01 compute-0 ceph-mon[75294]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:02 compute-0 sudo[149455]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:03 compute-0 ceph-mon[75294]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:03 compute-0 sudo[149608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duudhrhlahxzdbafrahldohofeaijflb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846882.8779488-125-239866490521684/AnsiballZ_systemd.py'
Jan 31 08:08:03 compute-0 sudo[149608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:03 compute-0 python3.9[149610]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:08:03 compute-0 sudo[149608]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:04 compute-0 python3.9[149763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:04 compute-0 python3.9[149884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846883.9107044-133-179774043997329/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:05 compute-0 python3.9[150034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:05 compute-0 python3.9[150155]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846884.948245-133-255757404425182/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:06 compute-0 ceph-mon[75294]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:08:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:07 compute-0 ovn_controller[145740]: 2026-01-31T08:08:07Z|00025|memory|INFO|16896 kB peak resident set size after 31.2 seconds
Jan 31 08:08:07 compute-0 ovn_controller[145740]: 2026-01-31T08:08:07Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 31 08:08:07 compute-0 podman[150279]: 2026-01-31 08:08:07.579336324 +0000 UTC m=+0.103207242 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:08:07 compute-0 python3.9[150312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:08 compute-0 python3.9[150453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846886.4743376-177-141352209668303/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:08 compute-0 ceph-mon[75294]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:08 compute-0 python3.9[150603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:09 compute-0 python3.9[150724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846888.218586-177-102004399450685/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:09 compute-0 python3.9[150874]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:08:09 compute-0 sudo[151026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlhgscvflepquagazglowyskrnmzlybm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846889.7336652-215-166147497621808/AnsiballZ_file.py'
Jan 31 08:08:09 compute-0 sudo[151026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:10 compute-0 python3.9[151028]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:10 compute-0 sudo[151026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:10 compute-0 ceph-mon[75294]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:10 compute-0 sudo[151178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofbmvuxbttqrtcxeaqebnzqycgqkgfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846890.3333993-223-107173752830585/AnsiballZ_stat.py'
Jan 31 08:08:10 compute-0 sudo[151178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:10 compute-0 python3.9[151180]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:10 compute-0 sudo[151178]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:10 compute-0 sudo[151256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abmddmqjgnkuznsjvwkosqvszadqxdwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846890.3333993-223-107173752830585/AnsiballZ_file.py'
Jan 31 08:08:10 compute-0 sudo[151256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:11 compute-0 python3.9[151258]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:11 compute-0 sudo[151256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:11 compute-0 sudo[151408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxllzvghsnhaiggmtjidfvchvckmueav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846891.314593-223-214108221334748/AnsiballZ_stat.py'
Jan 31 08:08:11 compute-0 sudo[151408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:11 compute-0 python3.9[151410]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:11 compute-0 sudo[151408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:11 compute-0 sudo[151486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqxqnyqodceiezpoxtdtcurzqxqcqkib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846891.314593-223-214108221334748/AnsiballZ_file.py'
Jan 31 08:08:11 compute-0 sudo[151486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:12 compute-0 python3.9[151488]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:12 compute-0 sudo[151486]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:12 compute-0 sudo[151638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgzrmfsykskzfcwljzmtuepcugpvjgrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846892.2968276-246-97584115989971/AnsiballZ_file.py'
Jan 31 08:08:12 compute-0 sudo[151638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:12 compute-0 ceph-mon[75294]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:12 compute-0 python3.9[151640]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:12 compute-0 sudo[151638]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:13 compute-0 sudo[151790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-holzskrlmtujvbdbntiqkzwqxsweeoqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846892.9224699-254-268243580787237/AnsiballZ_stat.py'
Jan 31 08:08:13 compute-0 sudo[151790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:13 compute-0 python3.9[151792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:13 compute-0 sudo[151790]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:13 compute-0 sudo[151868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnahohnarubakeznqqbfjndwuqfstadi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846892.9224699-254-268243580787237/AnsiballZ_file.py'
Jan 31 08:08:13 compute-0 sudo[151868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:13 compute-0 python3.9[151870]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:13 compute-0 sudo[151868]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:14 compute-0 sudo[152020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqrafqvniqlmndrnoinftxxmmdnygkgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846893.8935664-266-115886015644458/AnsiballZ_stat.py'
Jan 31 08:08:14 compute-0 sudo[152020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:14 compute-0 python3.9[152022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:14 compute-0 sudo[152020]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:14 compute-0 ceph-mon[75294]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:14 compute-0 sudo[152098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eopqmzvrujtwudfmvcgyclsptqitbthj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846893.8935664-266-115886015644458/AnsiballZ_file.py'
Jan 31 08:08:14 compute-0 sudo[152098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:14 compute-0 python3.9[152100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:14 compute-0 sudo[152098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:15 compute-0 sudo[152250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oagslnstkskbybrlseycofftrozjxoij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846894.9522588-278-85442236285149/AnsiballZ_systemd.py'
Jan 31 08:08:15 compute-0 sudo[152250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:15 compute-0 python3.9[152252]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:08:15 compute-0 systemd[1]: Reloading.
Jan 31 08:08:15 compute-0 systemd-rc-local-generator[152280]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:15 compute-0 systemd-sysv-generator[152283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:15 compute-0 sudo[152250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:16 compute-0 sudo[152439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elythhgpjfzxnfhowmpvsthecruqgfrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846895.983149-286-57059313285465/AnsiballZ_stat.py'
Jan 31 08:08:16 compute-0 sudo[152439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:16 compute-0 python3.9[152441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:16 compute-0 sudo[152439]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:16 compute-0 ceph-mon[75294]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:16 compute-0 sudo[152517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqgeujzaaqdkejocqikrgaxmdieevlog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846895.983149-286-57059313285465/AnsiballZ_file.py'
Jan 31 08:08:16 compute-0 sudo[152517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:16 compute-0 python3.9[152519]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:16 compute-0 sudo[152517]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:17 compute-0 sudo[152669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpcyuioyzopwgcloliaoowsiqspycde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846897.057532-298-52392485979862/AnsiballZ_stat.py'
Jan 31 08:08:17 compute-0 sudo[152669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:17 compute-0 python3.9[152671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:17 compute-0 sudo[152669]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:17 compute-0 sudo[152747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvdlxbgtcfxfxyjatcjrmpnvtkhyptsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846897.057532-298-52392485979862/AnsiballZ_file.py'
Jan 31 08:08:17 compute-0 sudo[152747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:17 compute-0 python3.9[152749]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:17 compute-0 sudo[152747]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:18 compute-0 sudo[152899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdumskizqckmgctmthodkdkjbvzibnue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846898.0630705-310-85428129354591/AnsiballZ_systemd.py'
Jan 31 08:08:18 compute-0 sudo[152899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:18 compute-0 ceph-mon[75294]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:18 compute-0 python3.9[152901]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:08:18 compute-0 systemd[1]: Reloading.
Jan 31 08:08:18 compute-0 systemd-rc-local-generator[152924]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:18 compute-0 systemd-sysv-generator[152928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:19 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 08:08:19 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 08:08:19 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 08:08:19 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 08:08:19 compute-0 sudo[152899]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:19 compute-0 sudo[153092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfpkswpvdbqafhqtknnaviwlyxtzeyaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846899.2847292-320-32989192097533/AnsiballZ_file.py'
Jan 31 08:08:19 compute-0 sudo[153092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:19 compute-0 python3.9[153094]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:19 compute-0 sudo[153092]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:20 compute-0 sudo[153244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfhxgnczbncmicwezspphiegbaiwbhlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846899.8837097-328-240401662892387/AnsiballZ_stat.py'
Jan 31 08:08:20 compute-0 sudo[153244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:20 compute-0 python3.9[153246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:20 compute-0 sudo[153244]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:20 compute-0 sudo[153367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iezjdgvcgydckzhyitidavnnsyxnqthb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846899.8837097-328-240401662892387/AnsiballZ_copy.py'
Jan 31 08:08:20 compute-0 sudo[153367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:20 compute-0 ceph-mon[75294]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:20 compute-0 python3.9[153369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846899.8837097-328-240401662892387/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:20 compute-0 sudo[153367]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:21 compute-0 sudo[153519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubmiqzftruionxplsxvlhdwnxtmsdsrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846901.2193453-345-162464376198596/AnsiballZ_file.py'
Jan 31 08:08:21 compute-0 sudo[153519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:21 compute-0 python3.9[153521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:21 compute-0 sudo[153519]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:08:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5603 writes, 24K keys, 5603 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5603 writes, 873 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5603 writes, 24K keys, 5603 commit groups, 1.0 writes per commit group, ingest: 19.03 MB, 0.03 MB/s
                                           Interval WAL: 5603 writes, 873 syncs, 6.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:08:22 compute-0 sudo[153671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfwclnjnrxpvyclpdttlikbfwcmtirmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846901.8174806-353-182905929371668/AnsiballZ_file.py'
Jan 31 08:08:22 compute-0 sudo[153671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:22 compute-0 python3.9[153673]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:08:22 compute-0 sudo[153671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:22 compute-0 sudo[153823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aczlxvnnksaxzpjkbvukxlioedgizddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846902.4748385-361-130391525570236/AnsiballZ_stat.py'
Jan 31 08:08:22 compute-0 sudo[153823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:22 compute-0 ceph-mon[75294]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:22 compute-0 python3.9[153825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:22 compute-0 sudo[153823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:23 compute-0 sudo[153946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mduxfsegvnzypgainzalamwvnoabjozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846902.4748385-361-130391525570236/AnsiballZ_copy.py'
Jan 31 08:08:23 compute-0 sudo[153946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:23 compute-0 python3.9[153948]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846902.4748385-361-130391525570236/.source.json _original_basename=.f598gpd6 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:23 compute-0 sudo[153946]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:23 compute-0 python3.9[154098]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:24 compute-0 ceph-mon[75294]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:25 compute-0 sudo[154519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzrplyckrfpzrcndrbjfagkqjecytkie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846905.2334335-401-156251826607113/AnsiballZ_container_config_data.py'
Jan 31 08:08:25 compute-0 sudo[154519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:25 compute-0 python3.9[154521]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 08:08:25 compute-0 sudo[154519]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:08:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6990 writes, 29K keys, 6990 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6990 writes, 1347 syncs, 5.19 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6990 writes, 29K keys, 6990 commit groups, 1.0 writes per commit group, ingest: 20.14 MB, 0.03 MB/s
                                           Interval WAL: 6990 writes, 1347 syncs, 5.19 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:08:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:26 compute-0 sudo[154671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rulzbxcbsdyjozegcghrkwzvedxkchvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846906.0706654-412-249434086091987/AnsiballZ_container_config_hash.py'
Jan 31 08:08:26 compute-0 sudo[154671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:26 compute-0 python3.9[154673]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:08:26 compute-0 sudo[154671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:26 compute-0 ceph-mon[75294]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:27 compute-0 sudo[154823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnnztebhrednnpuudvjfdrlzhvozmxfp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846906.836518-422-9092546637758/AnsiballZ_edpm_container_manage.py'
Jan 31 08:08:27 compute-0 sudo[154823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:27 compute-0 python3[154825]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:08:28 compute-0 ceph-mon[75294]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:08:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 5560 writes, 24K keys, 5560 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5560 writes, 798 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5560 writes, 24K keys, 5560 commit groups, 1.0 writes per commit group, ingest: 18.83 MB, 0.03 MB/s
                                           Interval WAL: 5560 writes, 798 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:08:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:31 compute-0 ceph-mon[75294]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:32 compute-0 ceph-mon[75294]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:35 compute-0 ceph-mon[75294]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:37 compute-0 ceph-mon[75294]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:37 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Check health
Jan 31 08:08:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:37 compute-0 podman[154839]: 2026-01-31 08:08:37.896416397 +0000 UTC m=+10.266316387 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:08:38 compute-0 podman[154963]: 2026-01-31 08:08:38.014973704 +0000 UTC m=+0.025161778 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:08:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:39 compute-0 ceph-mon[75294]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:40 compute-0 podman[154963]: 2026-01-31 08:08:40.223601357 +0000 UTC m=+2.233789401 container create df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 08:08:40 compute-0 podman[154976]: 2026-01-31 08:08:40.224929854 +0000 UTC m=+2.089592403 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:08:40 compute-0 python3[154825]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:08:40 compute-0 sudo[154823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:40 compute-0 sudo[155179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blwcegrlcydzhzikhhqtbedsipvisobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846920.4936574-430-130911184807395/AnsiballZ_stat.py'
Jan 31 08:08:40 compute-0 sudo[155179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:40 compute-0 python3.9[155181]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:08:41 compute-0 sudo[155179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:41 compute-0 ceph-mon[75294]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:41 compute-0 sudo[155333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbmtzbqcqwyjzptlpsvlbtmnkoqqrirq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846921.2120202-439-217916876911033/AnsiballZ_file.py'
Jan 31 08:08:41 compute-0 sudo[155333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:41 compute-0 python3.9[155335]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:41 compute-0 sudo[155333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:41 compute-0 sudo[155409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwgkypoahqrvnjcvrnmashfdrcnyihqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846921.2120202-439-217916876911033/AnsiballZ_stat.py'
Jan 31 08:08:41 compute-0 sudo[155409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:41 compute-0 python3.9[155411]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:08:41 compute-0 sudo[155409]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:42 compute-0 sudo[155560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxsjdcdaytmtmedcgvdvbftdaeorujsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846922.0174527-439-17444127504225/AnsiballZ_copy.py'
Jan 31 08:08:42 compute-0 sudo[155560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:42 compute-0 python3.9[155562]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846922.0174527-439-17444127504225/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:42 compute-0 sudo[155560]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:42 compute-0 sudo[155636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlbtfdzujswvxbxnlfxkiddxkttbumkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846922.0174527-439-17444127504225/AnsiballZ_systemd.py'
Jan 31 08:08:42 compute-0 sudo[155636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:43 compute-0 python3.9[155638]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:08:43 compute-0 systemd[1]: Reloading.
Jan 31 08:08:43 compute-0 systemd-sysv-generator[155667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:43 compute-0 systemd-rc-local-generator[155660]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:43 compute-0 ceph-mon[75294]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:43 compute-0 sudo[155636]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:43 compute-0 sudo[155747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szsfnsjuhccgkqkgjxfjvvgqwlnsmrin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846922.0174527-439-17444127504225/AnsiballZ_systemd.py'
Jan 31 08:08:43 compute-0 sudo[155747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:43 compute-0 python3.9[155749]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:08:44 compute-0 systemd[1]: Reloading.
Jan 31 08:08:44 compute-0 systemd-rc-local-generator[155772]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:44 compute-0 systemd-sysv-generator[155777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:44 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 08:08:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab382151e56216029562ec04347fd5d8b5b9ad2cb49ea98583b4cb55a6c8217a/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab382151e56216029562ec04347fd5d8b5b9ad2cb49ea98583b4cb55a6c8217a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531.
Jan 31 08:08:44 compute-0 podman[155789]: 2026-01-31 08:08:44.534415342 +0000 UTC m=+0.177998077 container init df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + sudo -E kolla_set_configs
Jan 31 08:08:44 compute-0 podman[155789]: 2026-01-31 08:08:44.560416363 +0000 UTC m=+0.203999068 container start df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:08:44 compute-0 edpm-start-podman-container[155789]: ovn_metadata_agent
Jan 31 08:08:44 compute-0 edpm-start-podman-container[155788]: Creating additional drop-in dependency for "ovn_metadata_agent" (df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531)
Jan 31 08:08:44 compute-0 podman[155811]: 2026-01-31 08:08:44.650523601 +0000 UTC m=+0.078363693 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:08:44 compute-0 systemd[1]: Reloading.
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Validating config file
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Copying service configuration files
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Writing out command to execute
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 08:08:44 compute-0 systemd-sysv-generator[155880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:44 compute-0 systemd-rc-local-generator[155876]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: ++ cat /run_command
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + CMD=neutron-ovn-metadata-agent
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + ARGS=
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + sudo kolla_copy_cacerts
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + [[ ! -n '' ]]
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + . kolla_extend_start
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + umask 0022
Jan 31 08:08:44 compute-0 ovn_metadata_agent[155805]: + exec neutron-ovn-metadata-agent
Jan 31 08:08:44 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 31 08:08:45 compute-0 sudo[155747]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:45 compute-0 ceph-mon[75294]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:45 compute-0 python3.9[156040]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 08:08:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:46 compute-0 sudo[156191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dotsykpcldrkeoxidieyisjbfcomlrxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846926.1197767-484-156558453820000/AnsiballZ_stat.py'
Jan 31 08:08:46 compute-0 sudo[156191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:46 compute-0 python3.9[156193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:08:46 compute-0 ceph-mon[75294]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:46 compute-0 sudo[156191]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:46 compute-0 sudo[156316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjhuyobxynblcbuemcqzkumeiiomlhey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846926.1197767-484-156558453820000/AnsiballZ_copy.py'
Jan 31 08:08:46 compute-0 sudo[156316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.916 155810 INFO neutron.common.config [-] Logging enabled!
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.916 155810 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.916 155810 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.917 155810 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.918 155810 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.919 155810 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.920 155810 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.921 155810 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.922 155810 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.923 155810 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.924 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.925 155810 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.926 155810 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.927 155810 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.928 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.929 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.930 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.931 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.932 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.933 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.934 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.935 155810 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.936 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.937 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.938 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.939 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.940 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.941 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.942 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.943 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.944 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.945 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.946 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.947 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.948 155810 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.957 155810 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.957 155810 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.957 155810 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.957 155810 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.958 155810 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.970 155810 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 41f56c18-6e96-48c3-b4a0-6aca47eb55b4 (UUID: 41f56c18-6e96-48c3-b4a0-6aca47eb55b4) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.992 155810 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.992 155810 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.992 155810 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.992 155810 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:08:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:46.995 155810 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.001 155810 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.005 155810 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '41f56c18-6e96-48c3-b4a0-6aca47eb55b4'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe9c294aa00>], external_ids={}, name=41f56c18-6e96-48c3-b4a0-6aca47eb55b4, nb_cfg_timestamp=1769846864411, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.006 155810 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe9c28f2c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.007 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.007 155810 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.007 155810 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.007 155810 INFO oslo_service.service [-] Starting 1 workers
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.010 155810 DEBUG oslo_service.service [-] Started child 156319 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.012 156319 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-236521'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.013 155810 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp9_zplg1u/privsep.sock']
Jan 31 08:08:47 compute-0 python3.9[156318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846926.1197767-484-156558453820000/.source.yaml _original_basename=.s3tvo1in follow=False checksum=96a1608d38620c468d2172c00d7e1e849e25addf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.035 156319 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.036 156319 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.036 156319 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.039 156319 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.045 156319 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 08:08:47 compute-0 sudo[156316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.051 156319 INFO eventlet.wsgi.server [-] (156319) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 31 08:08:47 compute-0 sshd-session[146767]: Connection closed by 192.168.122.30 port 59352
Jan 31 08:08:47 compute-0 sshd-session[146764]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:08:47 compute-0 systemd-logind[810]: Session 48 logged out. Waiting for processes to exit.
Jan 31 08:08:47 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 08:08:47 compute-0 systemd[1]: session-48.scope: Consumed 49.213s CPU time.
Jan 31 08:08:47 compute-0 systemd-logind[810]: Removed session 48.
Jan 31 08:08:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:47 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.598 155810 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.599 155810 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9_zplg1u/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.480 156348 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.512 156348 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.516 156348 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.517 156348 INFO oslo.privsep.daemon [-] privsep daemon running as pid 156348
Jan 31 08:08:47 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:47.602 156348 DEBUG oslo.privsep.daemon [-] privsep: reply[d16cfadb-ad3d-4fde-807c-fed84d2701e4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.037 156348 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.037 156348 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.038 156348 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.509 156348 DEBUG oslo.privsep.daemon [-] privsep: reply[cae1ab5e-9d7c-4a8d-9832-14259a6c730b]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.514 155810 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=41f56c18-6e96-48c3-b4a0-6aca47eb55b4, column=external_ids, values=({'neutron:ovn-metadata-id': '6503949b-8fe3-54b6-94b3-e25b9c62a208'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.527 155810 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41f56c18-6e96-48c3-b4a0-6aca47eb55b4, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.535 155810 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.535 155810 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.536 155810 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.536 155810 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.536 155810 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.536 155810 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.537 155810 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.537 155810 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.537 155810 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.537 155810 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.538 155810 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.538 155810 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.538 155810 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.538 155810 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.539 155810 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.539 155810 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.539 155810 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.540 155810 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.540 155810 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.540 155810 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.540 155810 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.540 155810 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.541 155810 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.541 155810 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.541 155810 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.542 155810 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.542 155810 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.542 155810 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.542 155810 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.543 155810 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.543 155810 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.543 155810 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.543 155810 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.544 155810 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.544 155810 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.544 155810 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.544 155810 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.545 155810 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.545 155810 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.545 155810 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.546 155810 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.546 155810 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.546 155810 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.546 155810 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.547 155810 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.547 155810 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.547 155810 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.548 155810 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.548 155810 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.549 155810 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.549 155810 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.549 155810 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.550 155810 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.550 155810 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.550 155810 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.550 155810 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.551 155810 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.551 155810 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.551 155810 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.551 155810 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.552 155810 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.552 155810 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.552 155810 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.552 155810 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.553 155810 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.553 155810 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.553 155810 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.553 155810 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.553 155810 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.554 155810 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.554 155810 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.554 155810 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.554 155810 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.555 155810 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.555 155810 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.555 155810 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.555 155810 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.556 155810 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.556 155810 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.556 155810 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.556 155810 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.557 155810 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.557 155810 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.557 155810 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.557 155810 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.557 155810 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.558 155810 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.558 155810 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.558 155810 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.558 155810 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.559 155810 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.559 155810 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.559 155810 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.559 155810 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.560 155810 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.560 155810 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.560 155810 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.560 155810 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.560 155810 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.561 155810 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.561 155810 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.561 155810 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.561 155810 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.562 155810 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.562 155810 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.562 155810 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.562 155810 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.563 155810 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.563 155810 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.563 155810 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.564 155810 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.564 155810 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.564 155810 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.564 155810 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.565 155810 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.565 155810 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.565 155810 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.565 155810 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.566 155810 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.566 155810 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.566 155810 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.566 155810 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.567 155810 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.567 155810 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.567 155810 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.567 155810 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.568 155810 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.568 155810 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.568 155810 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.569 155810 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.569 155810 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.569 155810 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.570 155810 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.570 155810 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.570 155810 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.571 155810 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.571 155810 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.572 155810 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.572 155810 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.572 155810 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.572 155810 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.573 155810 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.573 155810 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.573 155810 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.574 155810 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.574 155810 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.574 155810 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.575 155810 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.575 155810 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.575 155810 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.576 155810 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.576 155810 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.576 155810 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.576 155810 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.577 155810 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.577 155810 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.577 155810 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.578 155810 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.578 155810 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.578 155810 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.578 155810 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.579 155810 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.579 155810 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.579 155810 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.580 155810 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.580 155810 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.580 155810 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.581 155810 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.581 155810 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.581 155810 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.581 155810 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.582 155810 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.582 155810 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.582 155810 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.583 155810 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.583 155810 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.583 155810 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.583 155810 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.583 155810 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.584 155810 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.584 155810 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.584 155810 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.584 155810 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.584 155810 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.585 155810 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.585 155810 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.585 155810 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.585 155810 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.585 155810 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.586 155810 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.586 155810 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.586 155810 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.586 155810 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.586 155810 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.586 155810 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.587 155810 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.587 155810 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.587 155810 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.587 155810 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.587 155810 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.588 155810 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.588 155810 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.588 155810 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.588 155810 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.588 155810 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.588 155810 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.589 155810 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.589 155810 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.589 155810 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.589 155810 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.589 155810 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.590 155810 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.590 155810 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.590 155810 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.590 155810 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.590 155810 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.591 155810 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.591 155810 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.591 155810 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.591 155810 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.591 155810 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.591 155810 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.592 155810 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.592 155810 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.592 155810 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.592 155810 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.592 155810 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.593 155810 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.593 155810 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.593 155810 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.593 155810 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.593 155810 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.593 155810 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.594 155810 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.594 155810 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.594 155810 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.594 155810 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.594 155810 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.595 155810 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.595 155810 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.595 155810 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.595 155810 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.595 155810 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.596 155810 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.596 155810 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.596 155810 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.596 155810 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.596 155810 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.596 155810 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.597 155810 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.597 155810 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.597 155810 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.597 155810 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.597 155810 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.598 155810 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.598 155810 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.598 155810 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.598 155810 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.598 155810 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.599 155810 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.599 155810 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.599 155810 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.599 155810 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.599 155810 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.599 155810 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.600 155810 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.600 155810 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.600 155810 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.600 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.600 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.601 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.601 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.601 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.601 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.601 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.602 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.602 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.602 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.602 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.602 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.603 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.603 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.603 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.603 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.603 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.604 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.604 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.604 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.604 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.604 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.604 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.605 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.605 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.605 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.605 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.605 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.606 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.606 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.606 155810 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.607 155810 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.607 155810 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.607 155810 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.607 155810 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:08:48 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:08:48.607 155810 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 08:08:48 compute-0 ceph-mon[75294]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:08:50
Jan 31 08:08:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:08:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:08:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Jan 31 08:08:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:08:50 compute-0 ceph-mon[75294]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:52 compute-0 ceph-mon[75294]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:53 compute-0 sshd-session[156355]: Accepted publickey for zuul from 192.168.122.30 port 52322 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:08:53 compute-0 systemd-logind[810]: New session 49 of user zuul.
Jan 31 08:08:53 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 31 08:08:53 compute-0 sshd-session[156355]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:08:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:53 compute-0 sshd-session[156353]: Invalid user node from 193.32.162.145 port 59606
Jan 31 08:08:53 compute-0 sshd-session[156353]: Connection closed by invalid user node 193.32.162.145 port 59606 [preauth]
Jan 31 08:08:54 compute-0 python3.9[156508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:08:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:54 compute-0 ceph-mon[75294]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:54 compute-0 sudo[156662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkdcijxvoxollqkeesuwqmpacuxvqct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846934.508836-29-67232984837334/AnsiballZ_command.py'
Jan 31 08:08:54 compute-0 sudo[156662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:55 compute-0 python3.9[156664]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:55 compute-0 sudo[156662]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:08:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:08:55 compute-0 sudo[156827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioplpreidhzhpzoigqvtgwwplyzpeccv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846935.3782535-40-239229865866505/AnsiballZ_systemd_service.py'
Jan 31 08:08:55 compute-0 sudo[156827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:56 compute-0 python3.9[156829]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:08:56 compute-0 systemd[1]: Reloading.
Jan 31 08:08:56 compute-0 systemd-rc-local-generator[156848]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:56 compute-0 systemd-sysv-generator[156857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:56 compute-0 sudo[156827]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:56 compute-0 ceph-mon[75294]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:57 compute-0 python3.9[157013]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:08:57 compute-0 network[157030]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:08:57 compute-0 network[157031]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:08:57 compute-0 network[157032]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:08:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:58 compute-0 sudo[157100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:58 compute-0 sudo[157100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:58 compute-0 sudo[157100]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:58 compute-0 sudo[157125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:08:58 compute-0 sudo[157125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:59 compute-0 sudo[157125]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:08:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:08:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:08:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:08:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:08:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:59 compute-0 sudo[157182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:59 compute-0 sudo[157182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:59 compute-0 sudo[157182]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:59 compute-0 sudo[157207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:08:59 compute-0 sudo[157207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:59 compute-0 podman[157253]: 2026-01-31 08:08:59.896764022 +0000 UTC m=+0.060427376 container create 57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:08:59 compute-0 systemd[1]: Started libpod-conmon-57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a.scope.
Jan 31 08:08:59 compute-0 podman[157253]: 2026-01-31 08:08:59.854190542 +0000 UTC m=+0.017853916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:00 compute-0 podman[157253]: 2026-01-31 08:09:00.003974056 +0000 UTC m=+0.167637430 container init 57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_goodall, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:09:00 compute-0 podman[157253]: 2026-01-31 08:09:00.011104083 +0000 UTC m=+0.174767427 container start 57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_goodall, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:09:00 compute-0 podman[157253]: 2026-01-31 08:09:00.031634582 +0000 UTC m=+0.195297926 container attach 57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:09:00 compute-0 hardcore_goodall[157277]: 167 167
Jan 31 08:09:00 compute-0 systemd[1]: libpod-57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a.scope: Deactivated successfully.
Jan 31 08:09:00 compute-0 conmon[157277]: conmon 57accc490b197854da4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a.scope/container/memory.events
Jan 31 08:09:00 compute-0 podman[157253]: 2026-01-31 08:09:00.048175641 +0000 UTC m=+0.211838995 container died 57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b988386af34edc1e8db1f276e4f84e5d68173c06365d918cdf2e43acb8f3c0-merged.mount: Deactivated successfully.
Jan 31 08:09:00 compute-0 podman[157253]: 2026-01-31 08:09:00.09790045 +0000 UTC m=+0.261563804 container remove 57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_goodall, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:00 compute-0 systemd[1]: libpod-conmon-57accc490b197854da4dc7488833b518545679c1f656b166418d11ce244fa95a.scope: Deactivated successfully.
Jan 31 08:09:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:09:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:09:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:09:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:09:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:09:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:09:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:09:00 compute-0 podman[157308]: 2026-01-31 08:09:00.229409777 +0000 UTC m=+0.043296372 container create 7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nash, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:09:00 compute-0 systemd[1]: Started libpod-conmon-7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee.scope.
Jan 31 08:09:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd112e794653d75936ede31a8674bba5c6697f424fbd8ccc02bcb465e0ef7708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd112e794653d75936ede31a8674bba5c6697f424fbd8ccc02bcb465e0ef7708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd112e794653d75936ede31a8674bba5c6697f424fbd8ccc02bcb465e0ef7708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd112e794653d75936ede31a8674bba5c6697f424fbd8ccc02bcb465e0ef7708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd112e794653d75936ede31a8674bba5c6697f424fbd8ccc02bcb465e0ef7708/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:00 compute-0 podman[157308]: 2026-01-31 08:09:00.208163168 +0000 UTC m=+0.022049773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:00 compute-0 podman[157308]: 2026-01-31 08:09:00.318672591 +0000 UTC m=+0.132559226 container init 7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:09:00 compute-0 podman[157308]: 2026-01-31 08:09:00.325322686 +0000 UTC m=+0.139209271 container start 7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:00 compute-0 podman[157308]: 2026-01-31 08:09:00.332724421 +0000 UTC m=+0.146611066 container attach 7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:09:00 compute-0 awesome_nash[157324]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:09:00 compute-0 awesome_nash[157324]: --> All data devices are unavailable
Jan 31 08:09:00 compute-0 systemd[1]: libpod-7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee.scope: Deactivated successfully.
Jan 31 08:09:00 compute-0 podman[157308]: 2026-01-31 08:09:00.87724426 +0000 UTC m=+0.691130855 container died 7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd112e794653d75936ede31a8674bba5c6697f424fbd8ccc02bcb465e0ef7708-merged.mount: Deactivated successfully.
Jan 31 08:09:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:01 compute-0 podman[157308]: 2026-01-31 08:09:01.050538155 +0000 UTC m=+0.864424780 container remove 7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:01 compute-0 systemd[1]: libpod-conmon-7cb8d0c1218c58ae56a09d9e21eb41c5a905f0d34337edf06efd02f117dd93ee.scope: Deactivated successfully.
Jan 31 08:09:01 compute-0 sudo[157207]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:01 compute-0 ceph-mon[75294]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:01 compute-0 sudo[157363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:01 compute-0 sudo[157363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:01 compute-0 sudo[157363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:01 compute-0 sudo[157392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:09:01 compute-0 sudo[157392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:01 compute-0 podman[157460]: 2026-01-31 08:09:01.525618728 +0000 UTC m=+0.071394120 container create 660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haslett, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:09:01 compute-0 podman[157460]: 2026-01-31 08:09:01.475564691 +0000 UTC m=+0.021340093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:01 compute-0 systemd[1]: Started libpod-conmon-660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a.scope.
Jan 31 08:09:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:01 compute-0 podman[157460]: 2026-01-31 08:09:01.739322425 +0000 UTC m=+0.285097827 container init 660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haslett, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:01 compute-0 podman[157460]: 2026-01-31 08:09:01.744387885 +0000 UTC m=+0.290163267 container start 660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haslett, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:09:01 compute-0 cool_haslett[157476]: 167 167
Jan 31 08:09:01 compute-0 systemd[1]: libpod-660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a.scope: Deactivated successfully.
Jan 31 08:09:01 compute-0 podman[157460]: 2026-01-31 08:09:01.815856417 +0000 UTC m=+0.361631889 container attach 660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haslett, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:09:01 compute-0 podman[157460]: 2026-01-31 08:09:01.816886165 +0000 UTC m=+0.362661587 container died 660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haslett, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:09:01 compute-0 sudo[157618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjwvnwfyyjdasegwukhehkqhnxjriitu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846941.6345222-59-92857255996715/AnsiballZ_systemd_service.py'
Jan 31 08:09:01 compute-0 sudo[157618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0754beea5927222b532d6a87846dbf590565e9b8bc69c2c1b80e8786352e4e65-merged.mount: Deactivated successfully.
Jan 31 08:09:01 compute-0 podman[157460]: 2026-01-31 08:09:01.911092378 +0000 UTC m=+0.456867800 container remove 660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haslett, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:01 compute-0 systemd[1]: libpod-conmon-660967e07751e715e099b70bfbda55fbdf611072b94ba4739db185901109206a.scope: Deactivated successfully.
Jan 31 08:09:02 compute-0 podman[157629]: 2026-01-31 08:09:02.057560729 +0000 UTC m=+0.057450474 container create 0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_tharp, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:02 compute-0 systemd[1]: Started libpod-conmon-0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7.scope.
Jan 31 08:09:02 compute-0 podman[157629]: 2026-01-31 08:09:02.028208315 +0000 UTC m=+0.028098080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae378ac0e52d149a3316b7b3c7f4f291c6294c616525a9f4886bc3aaeb73024/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae378ac0e52d149a3316b7b3c7f4f291c6294c616525a9f4886bc3aaeb73024/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae378ac0e52d149a3316b7b3c7f4f291c6294c616525a9f4886bc3aaeb73024/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae378ac0e52d149a3316b7b3c7f4f291c6294c616525a9f4886bc3aaeb73024/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:02 compute-0 python3.9[157620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:09:02 compute-0 podman[157629]: 2026-01-31 08:09:02.180943501 +0000 UTC m=+0.180833316 container init 0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_tharp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:02 compute-0 podman[157629]: 2026-01-31 08:09:02.188424038 +0000 UTC m=+0.188313783 container start 0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:02 compute-0 sudo[157618]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:02 compute-0 podman[157629]: 2026-01-31 08:09:02.209938944 +0000 UTC m=+0.209828779 container attach 0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:09:02 compute-0 funny_tharp[157647]: {
Jan 31 08:09:02 compute-0 funny_tharp[157647]:     "0": [
Jan 31 08:09:02 compute-0 funny_tharp[157647]:         {
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "devices": [
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "/dev/loop3"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             ],
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_name": "ceph_lv0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_size": "21470642176",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "name": "ceph_lv0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "tags": {
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cluster_name": "ceph",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.crush_device_class": "",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.encrypted": "0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.objectstore": "bluestore",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osd_id": "0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.type": "block",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.vdo": "0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.with_tpm": "0"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             },
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "type": "block",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "vg_name": "ceph_vg0"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:         }
Jan 31 08:09:02 compute-0 funny_tharp[157647]:     ],
Jan 31 08:09:02 compute-0 funny_tharp[157647]:     "1": [
Jan 31 08:09:02 compute-0 funny_tharp[157647]:         {
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "devices": [
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "/dev/loop4"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             ],
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_name": "ceph_lv1",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_size": "21470642176",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "name": "ceph_lv1",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "tags": {
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cluster_name": "ceph",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.crush_device_class": "",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.encrypted": "0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.objectstore": "bluestore",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osd_id": "1",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.type": "block",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.vdo": "0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.with_tpm": "0"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             },
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "type": "block",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "vg_name": "ceph_vg1"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:         }
Jan 31 08:09:02 compute-0 funny_tharp[157647]:     ],
Jan 31 08:09:02 compute-0 funny_tharp[157647]:     "2": [
Jan 31 08:09:02 compute-0 funny_tharp[157647]:         {
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "devices": [
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "/dev/loop5"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             ],
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_name": "ceph_lv2",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_size": "21470642176",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "name": "ceph_lv2",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "tags": {
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.cluster_name": "ceph",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.crush_device_class": "",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.encrypted": "0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.objectstore": "bluestore",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osd_id": "2",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.type": "block",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.vdo": "0",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:                 "ceph.with_tpm": "0"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             },
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "type": "block",
Jan 31 08:09:02 compute-0 funny_tharp[157647]:             "vg_name": "ceph_vg2"
Jan 31 08:09:02 compute-0 funny_tharp[157647]:         }
Jan 31 08:09:02 compute-0 funny_tharp[157647]:     ]
Jan 31 08:09:02 compute-0 funny_tharp[157647]: }
Jan 31 08:09:02 compute-0 systemd[1]: libpod-0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7.scope: Deactivated successfully.
Jan 31 08:09:02 compute-0 podman[157629]: 2026-01-31 08:09:02.558536401 +0000 UTC m=+0.558426176 container died 0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:02 compute-0 sudo[157816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buahgaicrxiiszywlqhpyjdkhxeyvlga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846942.3281546-59-132371488042333/AnsiballZ_systemd_service.py'
Jan 31 08:09:02 compute-0 sudo[157816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae378ac0e52d149a3316b7b3c7f4f291c6294c616525a9f4886bc3aaeb73024-merged.mount: Deactivated successfully.
Jan 31 08:09:02 compute-0 python3.9[157818]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:09:02 compute-0 sudo[157816]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:02 compute-0 podman[157629]: 2026-01-31 08:09:02.970741171 +0000 UTC m=+0.970630956 container remove 0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:09:03 compute-0 systemd[1]: libpod-conmon-0ad4f2e24728e635593e2a0da8c185e6fa198a4c46c6eb8601eb7396dd310fd7.scope: Deactivated successfully.
Jan 31 08:09:03 compute-0 sudo[157392]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:03 compute-0 sudo[157845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:03 compute-0 sudo[157845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:03 compute-0 sudo[157845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:03 compute-0 sudo[157900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:09:03 compute-0 sudo[157900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:03 compute-0 ceph-mon[75294]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:03 compute-0 sudo[158020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbphitnmxzjpubdofqqwqjosiphjshfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846943.0919018-59-11885282179042/AnsiballZ_systemd_service.py'
Jan 31 08:09:03 compute-0 sudo[158020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:03 compute-0 podman[158035]: 2026-01-31 08:09:03.472560976 +0000 UTC m=+0.057273880 container create abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:09:03 compute-0 systemd[1]: Started libpod-conmon-abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb.scope.
Jan 31 08:09:03 compute-0 podman[158035]: 2026-01-31 08:09:03.440606499 +0000 UTC m=+0.025319483 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:03 compute-0 podman[158035]: 2026-01-31 08:09:03.575426757 +0000 UTC m=+0.160139731 container init abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:09:03 compute-0 podman[158035]: 2026-01-31 08:09:03.587564444 +0000 UTC m=+0.172277378 container start abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:09:03 compute-0 podman[158035]: 2026-01-31 08:09:03.592762279 +0000 UTC m=+0.177475263 container attach abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:03 compute-0 funny_dubinsky[158051]: 167 167
Jan 31 08:09:03 compute-0 systemd[1]: libpod-abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb.scope: Deactivated successfully.
Jan 31 08:09:03 compute-0 podman[158035]: 2026-01-31 08:09:03.598140568 +0000 UTC m=+0.182853472 container died abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:09:03 compute-0 python3.9[158022]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd8e50917018fea9ff766649a14a1f89843aaffd11aaa9444cd64f5dc1d705b-merged.mount: Deactivated successfully.
Jan 31 08:09:03 compute-0 sudo[158020]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:03 compute-0 podman[158035]: 2026-01-31 08:09:03.691448515 +0000 UTC m=+0.276161429 container remove abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:09:03 compute-0 systemd[1]: libpod-conmon-abc089c19ff98a2dcf015b1fd7a09f9e4fc7911ff4dacc48c32a3af2869538cb.scope: Deactivated successfully.
Jan 31 08:09:03 compute-0 podman[158124]: 2026-01-31 08:09:03.864841593 +0000 UTC m=+0.046346006 container create f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mendeleev, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:09:03 compute-0 systemd[1]: Started libpod-conmon-f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d.scope.
Jan 31 08:09:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c0c06c5d7389b1722ea4d83f1ae3ce15b1d0894bea02683a47edeedf103159/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c0c06c5d7389b1722ea4d83f1ae3ce15b1d0894bea02683a47edeedf103159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c0c06c5d7389b1722ea4d83f1ae3ce15b1d0894bea02683a47edeedf103159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c0c06c5d7389b1722ea4d83f1ae3ce15b1d0894bea02683a47edeedf103159/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 podman[158124]: 2026-01-31 08:09:03.84524848 +0000 UTC m=+0.026752893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:03 compute-0 podman[158124]: 2026-01-31 08:09:03.958467729 +0000 UTC m=+0.139972202 container init f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:03 compute-0 podman[158124]: 2026-01-31 08:09:03.970577875 +0000 UTC m=+0.152082288 container start f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mendeleev, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:09:03 compute-0 podman[158124]: 2026-01-31 08:09:03.976557111 +0000 UTC m=+0.158061574 container attach f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:04 compute-0 sudo[158248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyspouttxgomudjqslkihiirmthmahhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846943.7971816-59-266420967848047/AnsiballZ_systemd_service.py'
Jan 31 08:09:04 compute-0 sudo[158248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:04 compute-0 python3.9[158250]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:09:04 compute-0 sudo[158248]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:04 compute-0 lvm[158399]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:09:04 compute-0 lvm[158399]: VG ceph_vg0 finished
Jan 31 08:09:04 compute-0 lvm[158409]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:09:04 compute-0 lvm[158409]: VG ceph_vg1 finished
Jan 31 08:09:04 compute-0 lvm[158427]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:09:04 compute-0 lvm[158427]: VG ceph_vg2 finished
Jan 31 08:09:04 compute-0 sudo[158480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgosmwcsxvmvkuieotmpswuvmuezeuha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846944.5343015-59-2058371692907/AnsiballZ_systemd_service.py'
Jan 31 08:09:04 compute-0 sudo[158480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:04 compute-0 beautiful_mendeleev[158193]: {}
Jan 31 08:09:04 compute-0 systemd[1]: libpod-f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d.scope: Deactivated successfully.
Jan 31 08:09:04 compute-0 systemd[1]: libpod-f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d.scope: Consumed 1.143s CPU time.
Jan 31 08:09:04 compute-0 podman[158124]: 2026-01-31 08:09:04.848571948 +0000 UTC m=+1.030076401 container died f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3c0c06c5d7389b1722ea4d83f1ae3ce15b1d0894bea02683a47edeedf103159-merged.mount: Deactivated successfully.
Jan 31 08:09:04 compute-0 podman[158124]: 2026-01-31 08:09:04.984219121 +0000 UTC m=+1.165723534 container remove f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:09:04 compute-0 systemd[1]: libpod-conmon-f1d19ed91a43c21a15121831cba3e6811677e06012ee90583c56d720f1a1e45d.scope: Deactivated successfully.
Jan 31 08:09:05 compute-0 sudo[157900]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:09:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:09:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:09:05 compute-0 python3.9[158482]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:09:05 compute-0 sudo[158480]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:09:05 compute-0 ceph-mon[75294]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:09:05 compute-0 sudo[158522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:09:05 compute-0 sudo[158522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:05 compute-0 sudo[158522]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:05 compute-0 sudo[158672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buebvyrspzagkxzlpbgbnfrsnbzhfbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846945.3120003-59-134827860643491/AnsiballZ_systemd_service.py'
Jan 31 08:09:05 compute-0 sudo[158672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:05 compute-0 python3.9[158674]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:09:05 compute-0 sudo[158672]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:09:06 compute-0 sudo[158825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlvehshhzfgovuoocnajeurekjeyaykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846946.05171-59-91383252049100/AnsiballZ_systemd_service.py'
Jan 31 08:09:06 compute-0 sudo[158825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:06 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:09:06 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:09:06 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:09:06 compute-0 python3.9[158827]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:09:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:09:06 compute-0 sudo[158825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:07 compute-0 sudo[158979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqwwpqzitpsvrslmejzpqtvenmxnqcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846947.1732752-111-2898089945502/AnsiballZ_file.py'
Jan 31 08:09:07 compute-0 sudo[158979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:07 compute-0 python3.9[158981]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:07 compute-0 sudo[158979]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:07 compute-0 ceph-mon[75294]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:08 compute-0 sudo[159131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqrtguaronhzcytbuxmctfdilctbwees ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846947.9481134-111-103772807370795/AnsiballZ_file.py'
Jan 31 08:09:08 compute-0 sudo[159131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:08 compute-0 python3.9[159133]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:08 compute-0 sudo[159131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:08 compute-0 sudo[159283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dswduvkrsrokmmnotqfnxyzfbetxbeqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846948.5374293-111-158723740668128/AnsiballZ_file.py'
Jan 31 08:09:08 compute-0 sudo[159283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:08 compute-0 python3.9[159285]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:08 compute-0 sudo[159283]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:09 compute-0 ceph-mon[75294]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:09 compute-0 sudo[159435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcggdrswlnrbnuecklvsqynfdsaeqaid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846949.046766-111-99588334054844/AnsiballZ_file.py'
Jan 31 08:09:09 compute-0 sudo[159435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:09 compute-0 python3.9[159437]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:09 compute-0 sudo[159435]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:09 compute-0 sudo[159587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtjwjpqmibzupplbdpkybyazzzmnajfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846949.6303186-111-230222484732687/AnsiballZ_file.py'
Jan 31 08:09:09 compute-0 sudo[159587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:10 compute-0 python3.9[159589]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:10 compute-0 sudo[159587]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:10 compute-0 sudo[159749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqcmdreoqpeeepsjzwbqtxinmicwuur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846950.1754677-111-181543836694424/AnsiballZ_file.py'
Jan 31 08:09:10 compute-0 sudo[159749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:10 compute-0 podman[159713]: 2026-01-31 08:09:10.431130938 +0000 UTC m=+0.071161596 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 08:09:10 compute-0 python3.9[159752]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:10 compute-0 sudo[159749]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:10 compute-0 sudo[159917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsyjvfjryhyodinrbddczkqcfeyuoqye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846950.7095406-111-90488651150264/AnsiballZ_file.py'
Jan 31 08:09:10 compute-0 sudo[159917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:11 compute-0 python3.9[159919]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:11 compute-0 sudo[159917]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:11 compute-0 ceph-mon[75294]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:11 compute-0 sudo[160069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrztgoodhyfappztxojuqbkmilkqyqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846951.2487104-161-217589052467364/AnsiballZ_file.py'
Jan 31 08:09:11 compute-0 sudo[160069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:11 compute-0 python3.9[160071]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:11 compute-0 sudo[160069]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:11 compute-0 sudo[160221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufrqlrimhspytkingbuczkptdlrbwqtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846951.7544477-161-203163594722945/AnsiballZ_file.py'
Jan 31 08:09:11 compute-0 sudo[160221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:12 compute-0 python3.9[160223]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:12 compute-0 sudo[160221]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:12 compute-0 sudo[160373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-occwzmkurwbakwchbmriqjtxyxaeupwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846952.2790725-161-125925241269280/AnsiballZ_file.py'
Jan 31 08:09:12 compute-0 sudo[160373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:12 compute-0 python3.9[160375]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:12 compute-0 sudo[160373]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:13 compute-0 sudo[160525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvruiucbfwjlbaqbwboxrhhrdffjbnrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846952.8135319-161-47657819227012/AnsiballZ_file.py'
Jan 31 08:09:13 compute-0 sudo[160525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:13 compute-0 ceph-mon[75294]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:13 compute-0 python3.9[160527]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:13 compute-0 sudo[160525]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:13 compute-0 sudo[160677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peudncfdetjbkylujejnopvslfavkxyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846953.2865918-161-73684057168848/AnsiballZ_file.py'
Jan 31 08:09:13 compute-0 sudo[160677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:13 compute-0 python3.9[160679]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:13 compute-0 sudo[160677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:14 compute-0 sudo[160829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbyloiegteseaskzzvpnpzjyuulksee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846953.8343744-161-24908216099520/AnsiballZ_file.py'
Jan 31 08:09:14 compute-0 sudo[160829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:14 compute-0 python3.9[160831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:14 compute-0 sudo[160829]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:14 compute-0 sudo[160981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tetaaehfmzshfapkjfzsomhjzhopjryr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846954.3935678-161-225482264669961/AnsiballZ_file.py'
Jan 31 08:09:14 compute-0 sudo[160981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:14 compute-0 python3.9[160983]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:09:14 compute-0 sudo[160981]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:15 compute-0 podman[161032]: 2026-01-31 08:09:15.177605388 +0000 UTC m=+0.048158931 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:09:15 compute-0 sudo[161152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddxndgriipyihfzspaiaryzcabwclxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846955.0869422-212-246787258116163/AnsiballZ_command.py'
Jan 31 08:09:15 compute-0 sudo[161152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:15 compute-0 ceph-mon[75294]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:15 compute-0 python3.9[161154]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:15 compute-0 sudo[161152]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:16 compute-0 python3.9[161306]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:09:16 compute-0 sudo[161456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpxdxphrdrfzfdovkahwwqfnqkvgcdgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846956.5282595-230-249767698721439/AnsiballZ_systemd_service.py'
Jan 31 08:09:16 compute-0 sudo[161456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:16 compute-0 ceph-mon[75294]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:17 compute-0 python3.9[161458]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:09:17 compute-0 systemd[1]: Reloading.
Jan 31 08:09:17 compute-0 systemd-rc-local-generator[161481]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:09:17 compute-0 systemd-sysv-generator[161485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:09:17 compute-0 sudo[161456]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:17 compute-0 sudo[161644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqzgfsswitqnsugsnzszoudhvgqxikci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846957.446347-238-114287474914017/AnsiballZ_command.py'
Jan 31 08:09:17 compute-0 sudo[161644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:17 compute-0 python3.9[161646]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:17 compute-0 sudo[161644]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:18 compute-0 sudo[161797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndtmmhlhdcpjcwogtukhshzewgbgknou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846958.084279-238-154359062662177/AnsiballZ_command.py'
Jan 31 08:09:18 compute-0 sudo[161797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:18 compute-0 python3.9[161799]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:18 compute-0 sudo[161797]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:18 compute-0 sudo[161950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwodsesykcomdgkkgokmlqznuyllgwso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846958.6503685-238-278475394013042/AnsiballZ_command.py'
Jan 31 08:09:18 compute-0 sudo[161950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:18 compute-0 ceph-mon[75294]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:19 compute-0 python3.9[161952]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:19 compute-0 sudo[161950]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:19 compute-0 sudo[162103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokdcymmpgtvpwcomibjovepesveoswq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846959.2081-238-41059474310168/AnsiballZ_command.py'
Jan 31 08:09:19 compute-0 sudo[162103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:19 compute-0 python3.9[162105]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:19 compute-0 sudo[162103]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:19 compute-0 sudo[162256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxzzgqqypfskzqeuohqonnrshqfjanzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846959.7333097-238-220213015784955/AnsiballZ_command.py'
Jan 31 08:09:19 compute-0 sudo[162256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:20 compute-0 python3.9[162258]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:20 compute-0 sudo[162256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:20 compute-0 sudo[162409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbrjmikmwefpeqlcvzlhbntoxdbdwler ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846960.2509198-238-160116115566414/AnsiballZ_command.py'
Jan 31 08:09:20 compute-0 sudo[162409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:20 compute-0 python3.9[162411]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:20 compute-0 sudo[162409]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:21 compute-0 ceph-mon[75294]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:21 compute-0 sudo[162562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiiwmaqxkjbemdykatfezvtwvetdnwal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846960.824182-238-39210912101794/AnsiballZ_command.py'
Jan 31 08:09:21 compute-0 sudo[162562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:21 compute-0 python3.9[162564]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:21 compute-0 sudo[162562]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:21 compute-0 sudo[162715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lptymerwpchqrajayyzlnydfuluagnkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846961.5816317-292-32976739532964/AnsiballZ_getent.py'
Jan 31 08:09:21 compute-0 sudo[162715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:22 compute-0 python3.9[162717]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 08:09:22 compute-0 sudo[162715]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:22 compute-0 sudo[162868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkntkpxfwsieooqdphnujnvgvdgwfbqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846962.3085754-300-196976264888098/AnsiballZ_group.py'
Jan 31 08:09:22 compute-0 sudo[162868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:22 compute-0 python3.9[162870]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:09:23 compute-0 groupadd[162871]: group added to /etc/group: name=libvirt, GID=42473
Jan 31 08:09:23 compute-0 groupadd[162871]: group added to /etc/gshadow: name=libvirt
Jan 31 08:09:23 compute-0 groupadd[162871]: new group: name=libvirt, GID=42473
Jan 31 08:09:23 compute-0 ceph-mon[75294]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:23 compute-0 sudo[162868]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:23 compute-0 sudo[163026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzoduhkehiuwqpujfblmtcdvdfzxctat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846963.2877333-308-223758612680909/AnsiballZ_user.py'
Jan 31 08:09:23 compute-0 sudo[163026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:23 compute-0 python3.9[163028]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 08:09:24 compute-0 useradd[163030]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 08:09:24 compute-0 sudo[163026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:24 compute-0 sudo[163186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqcabfgjfkcufucsjxycqhthnzitgyoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846964.4130237-319-227382847899173/AnsiballZ_setup.py'
Jan 31 08:09:24 compute-0 sudo[163186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:24 compute-0 python3.9[163188]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:09:25 compute-0 sudo[163186]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:25 compute-0 ceph-mon[75294]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:25 compute-0 sudo[163270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgymxgnogcyyyhpydfkjqzmohahfjmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846964.4130237-319-227382847899173/AnsiballZ_dnf.py'
Jan 31 08:09:25 compute-0 sudo[163270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:25 compute-0 python3.9[163272]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:09:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:27 compute-0 ceph-mon[75294]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:28 compute-0 ceph-mon[75294]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:30 compute-0 ceph-mon[75294]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:32 compute-0 ceph-mon[75294]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:34 compute-0 ceph-mon[75294]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:36 compute-0 ceph-mon[75294]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:38 compute-0 ceph-mon[75294]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:40 compute-0 ceph-mon[75294]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:41 compute-0 podman[163458]: 2026-01-31 08:09:41.221292982 +0000 UTC m=+0.078864242 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:09:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:42 compute-0 ceph-mon[75294]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:44 compute-0 ceph-mon[75294]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:46 compute-0 podman[163491]: 2026-01-31 08:09:46.2205731 +0000 UTC m=+0.086689922 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:09:46 compute-0 ceph-mon[75294]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:09:46.950 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:09:46.951 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:09:46.951 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:48 compute-0 ceph-mon[75294]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:09:50
Jan 31 08:09:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:09:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:09:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Jan 31 08:09:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:09:50 compute-0 ceph-mon[75294]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:52 compute-0 ceph-mon[75294]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:54 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 08:09:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:09:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:09:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:09:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:09:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:09:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:09:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:09:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:55 compute-0 ceph-mon[75294]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:09:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:09:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:57 compute-0 ceph-mon[75294]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:59 compute-0 ceph-mon[75294]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:01 compute-0 ceph-mon[75294]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:03 compute-0 ceph-mon[75294]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:03 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 08:10:03 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:10:03 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:10:03 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:10:03 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:10:03 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:10:03 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:10:03 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:10:05 compute-0 ceph-mon[75294]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:05 compute-0 sudo[163524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:05 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 08:10:05 compute-0 sudo[163524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:05 compute-0 sudo[163524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:05 compute-0 sudo[163549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:10:05 compute-0 sudo[163549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:05 compute-0 sudo[163549]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:10:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:10:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:10:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:10:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:10:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:10:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:10:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:06 compute-0 sudo[163606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:06 compute-0 sudo[163606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:06 compute-0 sudo[163606]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:10:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:06 compute-0 sudo[163631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:10:06 compute-0 sudo[163631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:10:06 compute-0 podman[163669]: 2026-01-31 08:10:06.469914862 +0000 UTC m=+0.096609530 container create 14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:06 compute-0 podman[163669]: 2026-01-31 08:10:06.406454823 +0000 UTC m=+0.033149571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:06 compute-0 systemd[1]: Started libpod-conmon-14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5.scope.
Jan 31 08:10:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:06 compute-0 podman[163669]: 2026-01-31 08:10:06.584139174 +0000 UTC m=+0.210833882 container init 14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:06 compute-0 podman[163669]: 2026-01-31 08:10:06.592105758 +0000 UTC m=+0.218800456 container start 14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:06 compute-0 elastic_herschel[163685]: 167 167
Jan 31 08:10:06 compute-0 systemd[1]: libpod-14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5.scope: Deactivated successfully.
Jan 31 08:10:06 compute-0 podman[163669]: 2026-01-31 08:10:06.61427756 +0000 UTC m=+0.240972268 container attach 14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:10:06 compute-0 podman[163669]: 2026-01-31 08:10:06.615339609 +0000 UTC m=+0.242034277 container died 14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-09ba93c300956cb7782f5ca6725b7dfe2abc0df1891e08ce3cd4f401e31e65c6-merged.mount: Deactivated successfully.
Jan 31 08:10:06 compute-0 podman[163669]: 2026-01-31 08:10:06.775144349 +0000 UTC m=+0.401839017 container remove 14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_herschel, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:10:06 compute-0 systemd[1]: libpod-conmon-14771db8ad39c7a95d80fc908c98870d6c36a6abc807eab44b7f91f0c326dae5.scope: Deactivated successfully.
Jan 31 08:10:07 compute-0 podman[163708]: 2026-01-31 08:10:06.939121307 +0000 UTC m=+0.032263886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:07 compute-0 podman[163708]: 2026-01-31 08:10:07.112746565 +0000 UTC m=+0.205889094 container create 8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wiles, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:07 compute-0 systemd[1]: Started libpod-conmon-8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4.scope.
Jan 31 08:10:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8f3b1069a205f7d101ea0128ce97ca86076a342c85e0c76bbfa34ebabba84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8f3b1069a205f7d101ea0128ce97ca86076a342c85e0c76bbfa34ebabba84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8f3b1069a205f7d101ea0128ce97ca86076a342c85e0c76bbfa34ebabba84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8f3b1069a205f7d101ea0128ce97ca86076a342c85e0c76bbfa34ebabba84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8f3b1069a205f7d101ea0128ce97ca86076a342c85e0c76bbfa34ebabba84/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:07 compute-0 ceph-mon[75294]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:07 compute-0 podman[163708]: 2026-01-31 08:10:07.28416604 +0000 UTC m=+0.377308539 container init 8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wiles, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:07 compute-0 podman[163708]: 2026-01-31 08:10:07.293958035 +0000 UTC m=+0.387100554 container start 8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wiles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:10:07 compute-0 podman[163708]: 2026-01-31 08:10:07.333692708 +0000 UTC m=+0.426835227 container attach 8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wiles, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:10:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:08 compute-0 ceph-mon[75294]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:08 compute-0 beautiful_wiles[163725]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:10:08 compute-0 beautiful_wiles[163725]: --> All data devices are unavailable
Jan 31 08:10:08 compute-0 systemd[1]: libpod-8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4.scope: Deactivated successfully.
Jan 31 08:10:08 compute-0 podman[163708]: 2026-01-31 08:10:08.657590935 +0000 UTC m=+1.750733424 container died 8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wiles, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bd8f3b1069a205f7d101ea0128ce97ca86076a342c85e0c76bbfa34ebabba84-merged.mount: Deactivated successfully.
Jan 31 08:10:08 compute-0 podman[163708]: 2026-01-31 08:10:08.874474394 +0000 UTC m=+1.967616923 container remove 8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:10:08 compute-0 systemd[1]: libpod-conmon-8e08c44ada84b7082479849fb3cd174ac5ad46fcfa2865ad46818ff6c400c3d4.scope: Deactivated successfully.
Jan 31 08:10:08 compute-0 sudo[163631]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:08 compute-0 sudo[163758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:08 compute-0 sudo[163758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:08 compute-0 sudo[163758]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:09 compute-0 sudo[163783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:10:09 compute-0 sudo[163783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:09 compute-0 podman[163820]: 2026-01-31 08:10:09.301335171 +0000 UTC m=+0.065600460 container create 22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:10:09 compute-0 podman[163820]: 2026-01-31 08:10:09.251935094 +0000 UTC m=+0.016200403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:09 compute-0 systemd[1]: Started libpod-conmon-22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c.scope.
Jan 31 08:10:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:09 compute-0 podman[163820]: 2026-01-31 08:10:09.420115144 +0000 UTC m=+0.184380453 container init 22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:10:09 compute-0 podman[163820]: 2026-01-31 08:10:09.42762772 +0000 UTC m=+0.191893019 container start 22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:09 compute-0 funny_kapitsa[163837]: 167 167
Jan 31 08:10:09 compute-0 systemd[1]: libpod-22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c.scope: Deactivated successfully.
Jan 31 08:10:09 compute-0 podman[163820]: 2026-01-31 08:10:09.457511478 +0000 UTC m=+0.221776847 container attach 22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kapitsa, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:10:09 compute-0 podman[163820]: 2026-01-31 08:10:09.458828432 +0000 UTC m=+0.223093761 container died 22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:10:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b099ada47c401ba582392d77fc006f17d48ca1628acbecf4650b1ecfa4569e00-merged.mount: Deactivated successfully.
Jan 31 08:10:09 compute-0 podman[163820]: 2026-01-31 08:10:09.731069533 +0000 UTC m=+0.495334832 container remove 22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:10:09 compute-0 systemd[1]: libpod-conmon-22600a6e67741c29b5f51d5e76f0d6e35aedb684cc01eb55b30b3956d6cd726c.scope: Deactivated successfully.
Jan 31 08:10:09 compute-0 podman[163862]: 2026-01-31 08:10:09.867187728 +0000 UTC m=+0.061666497 container create a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:10:09 compute-0 podman[163862]: 2026-01-31 08:10:09.834417934 +0000 UTC m=+0.028896653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:09 compute-0 systemd[1]: Started libpod-conmon-a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a.scope.
Jan 31 08:10:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a28641a49c86f1e4ab4c4872cd17ccfa314134ddba8e0824f87538af56bfd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a28641a49c86f1e4ab4c4872cd17ccfa314134ddba8e0824f87538af56bfd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a28641a49c86f1e4ab4c4872cd17ccfa314134ddba8e0824f87538af56bfd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a28641a49c86f1e4ab4c4872cd17ccfa314134ddba8e0824f87538af56bfd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:10 compute-0 podman[163862]: 2026-01-31 08:10:10.047259937 +0000 UTC m=+0.241738706 container init a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:10:10 compute-0 podman[163862]: 2026-01-31 08:10:10.055954585 +0000 UTC m=+0.250433344 container start a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:10 compute-0 podman[163862]: 2026-01-31 08:10:10.069557248 +0000 UTC m=+0.264035997 container attach a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]: {
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:     "0": [
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:         {
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "devices": [
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "/dev/loop3"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             ],
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_name": "ceph_lv0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_size": "21470642176",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "name": "ceph_lv0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "tags": {
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cluster_name": "ceph",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.crush_device_class": "",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.encrypted": "0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.objectstore": "bluestore",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osd_id": "0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.type": "block",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.vdo": "0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.with_tpm": "0"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             },
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "type": "block",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "vg_name": "ceph_vg0"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:         }
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:     ],
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:     "1": [
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:         {
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "devices": [
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "/dev/loop4"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             ],
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_name": "ceph_lv1",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_size": "21470642176",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "name": "ceph_lv1",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "tags": {
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cluster_name": "ceph",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.crush_device_class": "",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.encrypted": "0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.objectstore": "bluestore",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osd_id": "1",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.type": "block",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.vdo": "0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.with_tpm": "0"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             },
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "type": "block",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "vg_name": "ceph_vg1"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:         }
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:     ],
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:     "2": [
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:         {
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "devices": [
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "/dev/loop5"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             ],
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_name": "ceph_lv2",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_size": "21470642176",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "name": "ceph_lv2",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "tags": {
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.cluster_name": "ceph",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.crush_device_class": "",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.encrypted": "0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.objectstore": "bluestore",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osd_id": "2",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.type": "block",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.vdo": "0",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:                 "ceph.with_tpm": "0"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             },
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "type": "block",
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:             "vg_name": "ceph_vg2"
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:         }
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]:     ]
Jan 31 08:10:10 compute-0 keen_mendeleev[163878]: }
Jan 31 08:10:10 compute-0 systemd[1]: libpod-a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a.scope: Deactivated successfully.
Jan 31 08:10:10 compute-0 podman[163862]: 2026-01-31 08:10:10.364523091 +0000 UTC m=+0.559001820 container died a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-38a28641a49c86f1e4ab4c4872cd17ccfa314134ddba8e0824f87538af56bfd6-merged.mount: Deactivated successfully.
Jan 31 08:10:10 compute-0 podman[163862]: 2026-01-31 08:10:10.547037344 +0000 UTC m=+0.741516063 container remove a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:10:10 compute-0 systemd[1]: libpod-conmon-a57403de1046d65a7d9c2afcbdca2b2efa26ddc468c53e2020418d75b1b0f01a.scope: Deactivated successfully.
Jan 31 08:10:10 compute-0 sudo[163783]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:10 compute-0 ceph-mon[75294]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:10 compute-0 sudo[163901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:10 compute-0 sudo[163901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:10 compute-0 sudo[163901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:10 compute-0 sudo[163926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:10:10 compute-0 sudo[163926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:11 compute-0 podman[163963]: 2026-01-31 08:10:10.970745829 +0000 UTC m=+0.016943262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:11 compute-0 podman[163963]: 2026-01-31 08:10:11.169784793 +0000 UTC m=+0.215982206 container create 484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_benz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:10:11 compute-0 systemd[1]: Started libpod-conmon-484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c.scope.
Jan 31 08:10:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:11 compute-0 podman[163963]: 2026-01-31 08:10:11.328969509 +0000 UTC m=+0.375166932 container init 484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:10:11 compute-0 podman[163963]: 2026-01-31 08:10:11.335095758 +0000 UTC m=+0.381293181 container start 484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_benz, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:11 compute-0 keen_benz[163981]: 167 167
Jan 31 08:10:11 compute-0 systemd[1]: libpod-484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c.scope: Deactivated successfully.
Jan 31 08:10:11 compute-0 podman[163963]: 2026-01-31 08:10:11.354803601 +0000 UTC m=+0.401001034 container attach 484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:11 compute-0 podman[163963]: 2026-01-31 08:10:11.355748696 +0000 UTC m=+0.401946099 container died 484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_benz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e858af917d0b36ede48089a74fd137a415c5729b04509bcfc129526019f0f085-merged.mount: Deactivated successfully.
Jan 31 08:10:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:11 compute-0 podman[163963]: 2026-01-31 08:10:11.876230092 +0000 UTC m=+0.922427505 container remove 484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:11 compute-0 systemd[1]: libpod-conmon-484ebe07059f36901ba926cfe909f6a4e555061d2f385b7d60d1de78f8d0480c.scope: Deactivated successfully.
Jan 31 08:10:12 compute-0 podman[163980]: 2026-01-31 08:10:12.021731582 +0000 UTC m=+0.767375347 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:10:12 compute-0 podman[164027]: 2026-01-31 08:10:12.017578744 +0000 UTC m=+0.029865760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:12 compute-0 podman[164027]: 2026-01-31 08:10:12.140618287 +0000 UTC m=+0.152905263 container create f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:10:12 compute-0 systemd[1]: Started libpod-conmon-f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926.scope.
Jan 31 08:10:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1691aeab64d22df15a967c81b973d03089115070010ca81682b2bdc53d87fbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1691aeab64d22df15a967c81b973d03089115070010ca81682b2bdc53d87fbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1691aeab64d22df15a967c81b973d03089115070010ca81682b2bdc53d87fbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1691aeab64d22df15a967c81b973d03089115070010ca81682b2bdc53d87fbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:12 compute-0 podman[164027]: 2026-01-31 08:10:12.332742462 +0000 UTC m=+0.345029458 container init f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:10:12 compute-0 podman[164027]: 2026-01-31 08:10:12.34151306 +0000 UTC m=+0.353800036 container start f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:12 compute-0 podman[164027]: 2026-01-31 08:10:12.343896882 +0000 UTC m=+0.356183858 container attach f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:10:12 compute-0 ceph-mon[75294]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:12 compute-0 lvm[164125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:10:12 compute-0 lvm[164126]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:10:12 compute-0 lvm[164125]: VG ceph_vg0 finished
Jan 31 08:10:12 compute-0 lvm[164126]: VG ceph_vg1 finished
Jan 31 08:10:12 compute-0 lvm[164128]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:10:12 compute-0 lvm[164128]: VG ceph_vg2 finished
Jan 31 08:10:13 compute-0 affectionate_bardeen[164047]: {}
Jan 31 08:10:13 compute-0 systemd[1]: libpod-f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926.scope: Deactivated successfully.
Jan 31 08:10:13 compute-0 podman[164027]: 2026-01-31 08:10:13.112229342 +0000 UTC m=+1.124516398 container died f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:13 compute-0 systemd[1]: libpod-f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926.scope: Consumed 1.043s CPU time.
Jan 31 08:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1691aeab64d22df15a967c81b973d03089115070010ca81682b2bdc53d87fbe-merged.mount: Deactivated successfully.
Jan 31 08:10:13 compute-0 podman[164027]: 2026-01-31 08:10:13.15131551 +0000 UTC m=+1.163602486 container remove f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:10:13 compute-0 systemd[1]: libpod-conmon-f13d08bdc2ad7551d0b9649cebf55511852a8a8dfa979db9c4de68ece1f51926.scope: Deactivated successfully.
Jan 31 08:10:13 compute-0 sudo[163926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:10:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:10:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:10:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:10:13 compute-0 sudo[164143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:10:13 compute-0 sudo[164143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:13 compute-0 sudo[164143]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:10:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:10:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:10:15 compute-0 ceph-mon[75294]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:10:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:10:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:17 compute-0 podman[165998]: 2026-01-31 08:10:17.178248359 +0000 UTC m=+0.044232693 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:10:17 compute-0 ceph-mon[75294]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:10:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:10:18 compute-0 ceph-mon[75294]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:10:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:10:20 compute-0 ceph-mon[75294]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:10:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:10:22 compute-0 ceph-mon[75294]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:10:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:10:24 compute-0 ceph-mon[75294]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:10:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:10:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:26 compute-0 ceph-mon[75294]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:10:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:10:28 compute-0 ceph-mon[75294]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:10:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:10:30 compute-0 ceph-mon[75294]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:10:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:32 compute-0 ceph-mon[75294]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:35 compute-0 ceph-mon[75294]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:37 compute-0 ceph-mon[75294]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:39 compute-0 ceph-mon[75294]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:40 compute-0 ceph-mon[75294]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:42 compute-0 podman[181057]: 2026-01-31 08:10:42.213450903 +0000 UTC m=+0.081330609 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:10:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:43 compute-0 ceph-mon[75294]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:46 compute-0 ceph-mon[75294]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:10:46.951 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:10:46.952 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:10:46.952 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:48 compute-0 podman[181088]: 2026-01-31 08:10:48.192809273 +0000 UTC m=+0.068368531 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 08:10:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:10:50
Jan 31 08:10:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:10:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:10:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', 'images']
Jan 31 08:10:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:10:50 compute-0 ceph-mon[75294]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:50 compute-0 ceph-mon[75294]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:51 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 08:10:51 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:10:51 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:10:51 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:10:51 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:10:51 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:10:51 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:10:51 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:10:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:52 compute-0 ceph-mon[75294]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:53 compute-0 ceph-mon[75294]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.146279) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847053146339, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2034, "num_deletes": 251, "total_data_size": 3483958, "memory_usage": 3523040, "flush_reason": "Manual Compaction"}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847053197358, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3418514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9877, "largest_seqno": 11910, "table_properties": {"data_size": 3409343, "index_size": 5793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17753, "raw_average_key_size": 19, "raw_value_size": 3391102, "raw_average_value_size": 3714, "num_data_blocks": 263, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846814, "oldest_key_time": 1769846814, "file_creation_time": 1769847053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 51109 microseconds, and 5956 cpu microseconds.
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.197400) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3418514 bytes OK
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.197416) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.207212) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.207225) EVENT_LOG_v1 {"time_micros": 1769847053207222, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.207241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3475493, prev total WAL file size 3475493, number of live WAL files 2.
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.207878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3338KB)], [26(6569KB)]
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847053207920, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10145374, "oldest_snapshot_seqno": -1}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3780 keys, 8310970 bytes, temperature: kUnknown
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847053258461, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8310970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8281849, "index_size": 18579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 90676, "raw_average_key_size": 23, "raw_value_size": 8209776, "raw_average_value_size": 2171, "num_data_blocks": 800, "num_entries": 3780, "num_filter_entries": 3780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769847053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.258669) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8310970 bytes
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.262409) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.5 rd, 164.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.4 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 4294, records dropped: 514 output_compression: NoCompression
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.262438) EVENT_LOG_v1 {"time_micros": 1769847053262424, "job": 10, "event": "compaction_finished", "compaction_time_micros": 50598, "compaction_time_cpu_micros": 13560, "output_level": 6, "num_output_files": 1, "total_output_size": 8310970, "num_input_records": 4294, "num_output_records": 3780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847053262767, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847053263295, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.207835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.263314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.263318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.263319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.263320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:10:53.263321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:53 compute-0 groupadd[181116]: group added to /etc/group: name=dnsmasq, GID=992
Jan 31 08:10:53 compute-0 groupadd[181116]: group added to /etc/gshadow: name=dnsmasq
Jan 31 08:10:53 compute-0 groupadd[181116]: new group: name=dnsmasq, GID=992
Jan 31 08:10:53 compute-0 useradd[181123]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 31 08:10:53 compute-0 dbus-broker-launch[786]: Noticed file-system modification, trigger reload.
Jan 31 08:10:53 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 08:10:53 compute-0 dbus-broker-launch[786]: Noticed file-system modification, trigger reload.
Jan 31 08:10:54 compute-0 groupadd[181136]: group added to /etc/group: name=clevis, GID=991
Jan 31 08:10:54 compute-0 groupadd[181136]: group added to /etc/gshadow: name=clevis
Jan 31 08:10:54 compute-0 groupadd[181136]: new group: name=clevis, GID=991
Jan 31 08:10:54 compute-0 useradd[181143]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 31 08:10:54 compute-0 usermod[181153]: add 'clevis' to group 'tss'
Jan 31 08:10:54 compute-0 usermod[181153]: add 'clevis' to shadow group 'tss'
Jan 31 08:10:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:55 compute-0 ceph-mon[75294]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:10:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:10:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:56 compute-0 polkitd[43559]: Reloading rules
Jan 31 08:10:56 compute-0 polkitd[43559]: Collecting garbage unconditionally...
Jan 31 08:10:56 compute-0 polkitd[43559]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 08:10:56 compute-0 polkitd[43559]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 08:10:56 compute-0 polkitd[43559]: Finished loading, compiling and executing 3 rules
Jan 31 08:10:56 compute-0 polkitd[43559]: Reloading rules
Jan 31 08:10:56 compute-0 polkitd[43559]: Collecting garbage unconditionally...
Jan 31 08:10:56 compute-0 polkitd[43559]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 08:10:56 compute-0 polkitd[43559]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 08:10:56 compute-0 polkitd[43559]: Finished loading, compiling and executing 3 rules
Jan 31 08:10:57 compute-0 ceph-mon[75294]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:59 compute-0 ceph-mon[75294]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:00 compute-0 sshd[1002]: Received signal 15; terminating.
Jan 31 08:11:00 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 08:11:00 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 08:11:00 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 08:11:00 compute-0 systemd[1]: sshd.service: Consumed 3.305s CPU time, read 32.0K from disk, written 124.0K to disk.
Jan 31 08:11:00 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 08:11:00 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 31 08:11:00 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:11:00 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:11:00 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:11:00 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 31 08:11:00 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 31 08:11:00 compute-0 sshd[181961]: Server listening on 0.0.0.0 port 22.
Jan 31 08:11:00 compute-0 sshd[181961]: Server listening on :: port 22.
Jan 31 08:11:00 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 31 08:11:01 compute-0 ceph-mon[75294]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:11:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:11:02 compute-0 systemd[1]: Reloading.
Jan 31 08:11:02 compute-0 systemd-rc-local-generator[182210]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:02 compute-0 systemd-sysv-generator[182220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:11:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:05 compute-0 ceph-mon[75294]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:11:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:06 compute-0 ceph-mon[75294]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:06 compute-0 ceph-mon[75294]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:07 compute-0 sudo[163270]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:07 compute-0 sudo[189839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dodhdwemqokzscfroxbcqsyofnxslnfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847067.1449556-331-170151527803185/AnsiballZ_systemd.py'
Jan 31 08:11:07 compute-0 sudo[189839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:08 compute-0 python3.9[189867]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:11:08 compute-0 systemd[1]: Reloading.
Jan 31 08:11:08 compute-0 systemd-rc-local-generator[190470]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:08 compute-0 systemd-sysv-generator[190479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:08 compute-0 sudo[189839]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:11:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:11:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.644s CPU time.
Jan 31 08:11:08 compute-0 systemd[1]: run-rb6f65b0da18c406ba1ee27fb3bbfdb95.service: Deactivated successfully.
Jan 31 08:11:08 compute-0 ceph-mon[75294]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:08 compute-0 sudo[190947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osuiacnbrdnsfcscpwumtotfcannbwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847068.5522034-331-156691498792667/AnsiballZ_systemd.py'
Jan 31 08:11:08 compute-0 sudo[190947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:09 compute-0 python3.9[190949]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:11:09 compute-0 systemd[1]: Reloading.
Jan 31 08:11:09 compute-0 systemd-sysv-generator[190978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:09 compute-0 systemd-rc-local-generator[190975]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:09 compute-0 sudo[190947]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:09 compute-0 sudo[191137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrimnyvybqqrbfitbfiqlkclvtzpgpkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847069.4476867-331-63135577229467/AnsiballZ_systemd.py'
Jan 31 08:11:09 compute-0 sudo[191137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:09 compute-0 python3.9[191139]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:11:09 compute-0 systemd[1]: Reloading.
Jan 31 08:11:10 compute-0 systemd-rc-local-generator[191166]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:10 compute-0 systemd-sysv-generator[191172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:10 compute-0 sudo[191137]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:10 compute-0 sudo[191327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoqrbrqvpztdgiayxkbbpqhsuiqtffvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847070.4212165-331-266286786529364/AnsiballZ_systemd.py'
Jan 31 08:11:10 compute-0 sudo[191327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:10 compute-0 ceph-mon[75294]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:10 compute-0 python3.9[191329]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:11:10 compute-0 systemd[1]: Reloading.
Jan 31 08:11:11 compute-0 systemd-sysv-generator[191357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:11 compute-0 systemd-rc-local-generator[191354]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:11 compute-0 sudo[191327]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:11 compute-0 sudo[191517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onwccitqnzguwmpxcnomulbypbibuopl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847071.3969908-360-99056280237237/AnsiballZ_systemd.py'
Jan 31 08:11:11 compute-0 sudo[191517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:11 compute-0 python3.9[191519]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:11 compute-0 systemd[1]: Reloading.
Jan 31 08:11:12 compute-0 systemd-sysv-generator[191550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:12 compute-0 systemd-rc-local-generator[191546]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:12 compute-0 sudo[191517]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:12 compute-0 podman[191558]: 2026-01-31 08:11:12.377910848 +0000 UTC m=+0.119817362 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:11:12 compute-0 sudo[191735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvgxcjsfcqxdlfznbjrntzgwjyddgsio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847072.3807404-360-259356934088343/AnsiballZ_systemd.py'
Jan 31 08:11:12 compute-0 sudo[191735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:12 compute-0 ceph-mon[75294]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:12 compute-0 python3.9[191737]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:13 compute-0 systemd[1]: Reloading.
Jan 31 08:11:13 compute-0 systemd-sysv-generator[191775]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:13 compute-0 systemd-rc-local-generator[191770]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:13 compute-0 sudo[191735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:13 compute-0 sudo[191777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:13 compute-0 sudo[191777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:13 compute-0 sudo[191777]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:13 compute-0 sudo[191802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:11:13 compute-0 sudo[191802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:13 compute-0 sudo[191990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yadlbotsoipiffyknbzdcxicumzsvlcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847073.4143202-360-63816981570716/AnsiballZ_systemd.py'
Jan 31 08:11:13 compute-0 sudo[191990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:13 compute-0 podman[192023]: 2026-01-31 08:11:13.776360668 +0000 UTC m=+0.065212636 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:11:13 compute-0 podman[192023]: 2026-01-31 08:11:13.882985391 +0000 UTC m=+0.171837369 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:11:13 compute-0 python3.9[191995]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:13 compute-0 systemd[1]: Reloading.
Jan 31 08:11:14 compute-0 systemd-rc-local-generator[192112]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:14 compute-0 systemd-sysv-generator[192117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:14 compute-0 sudo[191990]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:14 compute-0 sudo[192369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rawcoimqyzudrrfgxtpmoigvmzupquxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847074.3808875-360-214744746624102/AnsiballZ_systemd.py'
Jan 31 08:11:14 compute-0 sudo[192369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:14 compute-0 sudo[191802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:11:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:11:14 compute-0 ceph-mon[75294]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:14 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:14 compute-0 sudo[192398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:14 compute-0 sudo[192398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:14 compute-0 sudo[192398]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:14 compute-0 sudo[192423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:11:14 compute-0 sudo[192423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:14 compute-0 python3.9[192379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:15 compute-0 sudo[192369]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:15 compute-0 sudo[192423]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:11:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:11:15 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:11:15 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:11:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:11:15 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:11:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:11:15 compute-0 sudo[192630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxtvseqsjcshbkkauhihuavthgzcyeqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847075.1099002-360-25107214647857/AnsiballZ_systemd.py'
Jan 31 08:11:15 compute-0 sudo[192630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:15 compute-0 sudo[192632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:15 compute-0 sudo[192632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:15 compute-0 sudo[192632]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:15 compute-0 sudo[192658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:11:15 compute-0 sudo[192658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:15 compute-0 python3.9[192635]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:15 compute-0 systemd[1]: Reloading.
Jan 31 08:11:15 compute-0 podman[192695]: 2026-01-31 08:11:15.705874528 +0000 UTC m=+0.103160639 container create ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wiles, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:11:15 compute-0 podman[192695]: 2026-01-31 08:11:15.624156549 +0000 UTC m=+0.021442680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:11:15 compute-0 systemd-sysv-generator[192743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:15 compute-0 systemd-rc-local-generator[192735]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:11:15 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:11:15 compute-0 systemd[1]: Started libpod-conmon-ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6.scope.
Jan 31 08:11:15 compute-0 sudo[192630]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:16 compute-0 podman[192695]: 2026-01-31 08:11:16.034721374 +0000 UTC m=+0.432007505 container init ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wiles, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:11:16 compute-0 podman[192695]: 2026-01-31 08:11:16.062789297 +0000 UTC m=+0.460075418 container start ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:11:16 compute-0 systemd[1]: libpod-ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6.scope: Deactivated successfully.
Jan 31 08:11:16 compute-0 quizzical_wiles[192750]: 167 167
Jan 31 08:11:16 compute-0 conmon[192750]: conmon ff23e7b545314472d42f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6.scope/container/memory.events
Jan 31 08:11:16 compute-0 podman[192695]: 2026-01-31 08:11:16.078372345 +0000 UTC m=+0.475658506 container attach ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:16 compute-0 podman[192695]: 2026-01-31 08:11:16.079489485 +0000 UTC m=+0.476775616 container died ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-66ca01559a02b7df10ec296434ceb5401d06ce732cdf6ada32c88b6bb6228a45-merged.mount: Deactivated successfully.
Jan 31 08:11:16 compute-0 podman[192695]: 2026-01-31 08:11:16.216041052 +0000 UTC m=+0.613327173 container remove ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:11:16 compute-0 systemd[1]: libpod-conmon-ff23e7b545314472d42f36a8848af2bf3f8c1d0e7a02025f960fa8200dfed3f6.scope: Deactivated successfully.
Jan 31 08:11:16 compute-0 podman[192875]: 2026-01-31 08:11:16.373287138 +0000 UTC m=+0.060549217 container create cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:11:16 compute-0 systemd[1]: Started libpod-conmon-cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9.scope.
Jan 31 08:11:16 compute-0 podman[192875]: 2026-01-31 08:11:16.333981997 +0000 UTC m=+0.021244076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:11:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:16 compute-0 sudo[192944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooatqkdsroouhghshwaujussdtmosrxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847076.1647086-396-264994274037119/AnsiballZ_systemd.py'
Jan 31 08:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f8a39b5276b37aa4f87d466482f74b197fc8b127d1812d46b957a3a5da32cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f8a39b5276b37aa4f87d466482f74b197fc8b127d1812d46b957a3a5da32cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f8a39b5276b37aa4f87d466482f74b197fc8b127d1812d46b957a3a5da32cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f8a39b5276b37aa4f87d466482f74b197fc8b127d1812d46b957a3a5da32cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f8a39b5276b37aa4f87d466482f74b197fc8b127d1812d46b957a3a5da32cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:16 compute-0 sudo[192944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:16 compute-0 podman[192875]: 2026-01-31 08:11:16.48787817 +0000 UTC m=+0.175140259 container init cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:11:16 compute-0 podman[192875]: 2026-01-31 08:11:16.495417148 +0000 UTC m=+0.182679227 container start cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:11:16 compute-0 podman[192875]: 2026-01-31 08:11:16.506284877 +0000 UTC m=+0.193546956 container attach cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:11:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:16 compute-0 python3.9[192947]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:11:16 compute-0 systemd[1]: Reloading.
Jan 31 08:11:16 compute-0 determined_napier[192939]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:11:16 compute-0 determined_napier[192939]: --> All data devices are unavailable
Jan 31 08:11:16 compute-0 systemd-rc-local-generator[192991]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:11:16 compute-0 systemd-sysv-generator[192995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:11:16 compute-0 podman[192875]: 2026-01-31 08:11:16.934927328 +0000 UTC m=+0.622189497 container died cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:11:17 compute-0 systemd[1]: libpod-cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9.scope: Deactivated successfully.
Jan 31 08:11:17 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 08:11:17 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 08:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f8a39b5276b37aa4f87d466482f74b197fc8b127d1812d46b957a3a5da32cd-merged.mount: Deactivated successfully.
Jan 31 08:11:17 compute-0 sudo[192944]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:17 compute-0 podman[192875]: 2026-01-31 08:11:17.163521898 +0000 UTC m=+0.850783937 container remove cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:11:17 compute-0 systemd[1]: libpod-conmon-cb39207001dfcf450b9094021bfacb2ce1562b73c237ecdd9946bdcd4f0874b9.scope: Deactivated successfully.
Jan 31 08:11:17 compute-0 sudo[192658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:17 compute-0 sudo[193031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:17 compute-0 sudo[193031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:17 compute-0 sudo[193031]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:17 compute-0 sudo[193067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:11:17 compute-0 sudo[193067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:17 compute-0 ceph-mon[75294]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:17 compute-0 podman[193179]: 2026-01-31 08:11:17.501284389 +0000 UTC m=+0.030844009 container create 4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:11:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:17 compute-0 systemd[1]: Started libpod-conmon-4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf.scope.
Jan 31 08:11:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:17 compute-0 sudo[193249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbijafgjkwlckxpxfdprohidjzphqkik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847077.3358903-404-137955107721202/AnsiballZ_systemd.py'
Jan 31 08:11:17 compute-0 sudo[193249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:17 compute-0 podman[193179]: 2026-01-31 08:11:17.552795816 +0000 UTC m=+0.082355486 container init 4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:11:17 compute-0 podman[193179]: 2026-01-31 08:11:17.556761586 +0000 UTC m=+0.086321216 container start 4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feynman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:11:17 compute-0 podman[193179]: 2026-01-31 08:11:17.560115658 +0000 UTC m=+0.089675278 container attach 4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:11:17 compute-0 funny_feynman[193240]: 167 167
Jan 31 08:11:17 compute-0 systemd[1]: libpod-4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf.scope: Deactivated successfully.
Jan 31 08:11:17 compute-0 podman[193179]: 2026-01-31 08:11:17.561977889 +0000 UTC m=+0.091537509 container died 4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:11:17 compute-0 podman[193179]: 2026-01-31 08:11:17.48677449 +0000 UTC m=+0.016334130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4613fb00b54a8ce7c289728917b88ce67a087806faa52c71976cbb6654912542-merged.mount: Deactivated successfully.
Jan 31 08:11:17 compute-0 podman[193179]: 2026-01-31 08:11:17.613615699 +0000 UTC m=+0.143175329 container remove 4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:11:17 compute-0 systemd[1]: libpod-conmon-4d6aa3f9f2dd8147ff9b3e2feb0509f32e9e00ebc7e8e7dea89b6d09a22d78bf.scope: Deactivated successfully.
Jan 31 08:11:17 compute-0 podman[193273]: 2026-01-31 08:11:17.7328886 +0000 UTC m=+0.046576782 container create 92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:11:17 compute-0 systemd[1]: Started libpod-conmon-92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492.scope.
Jan 31 08:11:17 compute-0 podman[193273]: 2026-01-31 08:11:17.706817654 +0000 UTC m=+0.020505856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:11:17 compute-0 python3.9[193252]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e802198a2deb9387892043d314f410b70094278d6359a2add7692781fc8b28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e802198a2deb9387892043d314f410b70094278d6359a2add7692781fc8b28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e802198a2deb9387892043d314f410b70094278d6359a2add7692781fc8b28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e802198a2deb9387892043d314f410b70094278d6359a2add7692781fc8b28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:17 compute-0 podman[193273]: 2026-01-31 08:11:17.848053349 +0000 UTC m=+0.161741561 container init 92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_liskov, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:11:17 compute-0 podman[193273]: 2026-01-31 08:11:17.855267927 +0000 UTC m=+0.168956109 container start 92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_liskov, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:11:17 compute-0 podman[193273]: 2026-01-31 08:11:17.863170614 +0000 UTC m=+0.176858796 container attach 92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:11:17 compute-0 sudo[193249]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:18 compute-0 stoic_liskov[193290]: {
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:     "0": [
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:         {
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "devices": [
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "/dev/loop3"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             ],
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_name": "ceph_lv0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_size": "21470642176",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "name": "ceph_lv0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "tags": {
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cluster_name": "ceph",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.crush_device_class": "",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.encrypted": "0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.objectstore": "bluestore",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osd_id": "0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.type": "block",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.vdo": "0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.with_tpm": "0"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             },
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "type": "block",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "vg_name": "ceph_vg0"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:         }
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:     ],
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:     "1": [
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:         {
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "devices": [
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "/dev/loop4"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             ],
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_name": "ceph_lv1",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_size": "21470642176",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "name": "ceph_lv1",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "tags": {
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cluster_name": "ceph",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.crush_device_class": "",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.encrypted": "0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.objectstore": "bluestore",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osd_id": "1",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.type": "block",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.vdo": "0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.with_tpm": "0"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             },
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "type": "block",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "vg_name": "ceph_vg1"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:         }
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:     ],
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:     "2": [
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:         {
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "devices": [
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "/dev/loop5"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             ],
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_name": "ceph_lv2",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_size": "21470642176",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "name": "ceph_lv2",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "tags": {
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.cluster_name": "ceph",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.crush_device_class": "",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.encrypted": "0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.objectstore": "bluestore",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osd_id": "2",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.type": "block",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.vdo": "0",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:                 "ceph.with_tpm": "0"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             },
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "type": "block",
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:             "vg_name": "ceph_vg2"
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:         }
Jan 31 08:11:18 compute-0 stoic_liskov[193290]:     ]
Jan 31 08:11:18 compute-0 stoic_liskov[193290]: }
Jan 31 08:11:18 compute-0 systemd[1]: libpod-92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492.scope: Deactivated successfully.
Jan 31 08:11:18 compute-0 podman[193273]: 2026-01-31 08:11:18.141670036 +0000 UTC m=+0.455358218 container died 92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_liskov, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e802198a2deb9387892043d314f410b70094278d6359a2add7692781fc8b28-merged.mount: Deactivated successfully.
Jan 31 08:11:18 compute-0 podman[193273]: 2026-01-31 08:11:18.198839349 +0000 UTC m=+0.512527541 container remove 92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_liskov, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:11:18 compute-0 systemd[1]: libpod-conmon-92be2a78d32a7812f7df7c64d5c306a8156aa5c97b4c6b19afdff73939b96492.scope: Deactivated successfully.
Jan 31 08:11:18 compute-0 sudo[193067]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:18 compute-0 podman[193378]: 2026-01-31 08:11:18.294542111 +0000 UTC m=+0.057504443 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:18 compute-0 sudo[193399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:18 compute-0 sudo[193399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:18 compute-0 sudo[193399]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:18 compute-0 sudo[193457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:11:18 compute-0 sudo[193457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:18 compute-0 sudo[193532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwyjomzkgbbuiwevgnveqpugdhzmxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847078.176693-404-70694699415845/AnsiballZ_systemd.py'
Jan 31 08:11:18 compute-0 sudo[193532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:18 compute-0 podman[193547]: 2026-01-31 08:11:18.642539935 +0000 UTC m=+0.051090896 container create b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 31 08:11:18 compute-0 systemd[1]: Started libpod-conmon-b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc.scope.
Jan 31 08:11:18 compute-0 podman[193547]: 2026-01-31 08:11:18.618088553 +0000 UTC m=+0.026639534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:11:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:18 compute-0 python3.9[193534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:18 compute-0 podman[193547]: 2026-01-31 08:11:18.745781035 +0000 UTC m=+0.154332026 container init b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:11:18 compute-0 podman[193547]: 2026-01-31 08:11:18.754561196 +0000 UTC m=+0.163112187 container start b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:11:18 compute-0 goofy_goldwasser[193564]: 167 167
Jan 31 08:11:18 compute-0 systemd[1]: libpod-b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc.scope: Deactivated successfully.
Jan 31 08:11:18 compute-0 conmon[193564]: conmon b089289c317ca02c5f4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc.scope/container/memory.events
Jan 31 08:11:18 compute-0 podman[193547]: 2026-01-31 08:11:18.763227475 +0000 UTC m=+0.171778436 container attach b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:11:18 compute-0 podman[193547]: 2026-01-31 08:11:18.76378078 +0000 UTC m=+0.172331761 container died b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:11:18 compute-0 sudo[193532]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a080385b1db503c70f7c960738357693748f1e0cd95e128a719494efe7fac88-merged.mount: Deactivated successfully.
Jan 31 08:11:18 compute-0 podman[193547]: 2026-01-31 08:11:18.951853984 +0000 UTC m=+0.360404945 container remove b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:11:18 compute-0 systemd[1]: libpod-conmon-b089289c317ca02c5f4b082304750a89a5d6fbfa6bdd97ae3ce38657c3525bfc.scope: Deactivated successfully.
Jan 31 08:11:19 compute-0 podman[193668]: 2026-01-31 08:11:19.139077415 +0000 UTC m=+0.100153387 container create d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_maxwell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:11:19 compute-0 podman[193668]: 2026-01-31 08:11:19.068455941 +0000 UTC m=+0.029531933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:11:19 compute-0 systemd[1]: Started libpod-conmon-d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483.scope.
Jan 31 08:11:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:19 compute-0 sudo[193760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjoaoznmeapwlxoiinbdnovvbxipvyae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847078.93749-404-222067329661928/AnsiballZ_systemd.py'
Jan 31 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e712aa0c5542aadcd33433037bedb6798f98a5f5f2154fbe9de906115d8ffe9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e712aa0c5542aadcd33433037bedb6798f98a5f5f2154fbe9de906115d8ffe9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:19 compute-0 sudo[193760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e712aa0c5542aadcd33433037bedb6798f98a5f5f2154fbe9de906115d8ffe9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e712aa0c5542aadcd33433037bedb6798f98a5f5f2154fbe9de906115d8ffe9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:19 compute-0 podman[193668]: 2026-01-31 08:11:19.21960724 +0000 UTC m=+0.180683242 container init d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:11:19 compute-0 podman[193668]: 2026-01-31 08:11:19.227071515 +0000 UTC m=+0.188147487 container start d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_maxwell, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:11:19 compute-0 podman[193668]: 2026-01-31 08:11:19.233729888 +0000 UTC m=+0.194805860 container attach d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:11:19 compute-0 ceph-mon[75294]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:19 compute-0 python3.9[193762]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:19 compute-0 sudo[193760]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:19 compute-0 lvm[193989]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:11:19 compute-0 lvm[193984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:11:19 compute-0 lvm[193989]: VG ceph_vg1 finished
Jan 31 08:11:19 compute-0 lvm[193984]: VG ceph_vg0 finished
Jan 31 08:11:19 compute-0 sudo[193992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzrevsmpbigtiseivrgqfxzeowvsavra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847079.6096168-404-59601670819973/AnsiballZ_systemd.py'
Jan 31 08:11:19 compute-0 lvm[193994]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:11:19 compute-0 lvm[193994]: VG ceph_vg2 finished
Jan 31 08:11:19 compute-0 sudo[193992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:19 compute-0 adoring_maxwell[193754]: {}
Jan 31 08:11:19 compute-0 systemd[1]: libpod-d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483.scope: Deactivated successfully.
Jan 31 08:11:19 compute-0 systemd[1]: libpod-d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483.scope: Consumed 1.009s CPU time.
Jan 31 08:11:19 compute-0 conmon[193754]: conmon d80f323bbdec8436dacf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483.scope/container/memory.events
Jan 31 08:11:19 compute-0 podman[193668]: 2026-01-31 08:11:19.975312279 +0000 UTC m=+0.936388261 container died d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:20 compute-0 python3.9[193996]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e712aa0c5542aadcd33433037bedb6798f98a5f5f2154fbe9de906115d8ffe9c-merged.mount: Deactivated successfully.
Jan 31 08:11:20 compute-0 sudo[193992]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:20 compute-0 podman[193668]: 2026-01-31 08:11:20.2083637 +0000 UTC m=+1.169439672 container remove d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_maxwell, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:11:20 compute-0 systemd[1]: libpod-conmon-d80f323bbdec8436dacfe4ff430d35d3e377ed1962d5a7f1f766961994383483.scope: Deactivated successfully.
Jan 31 08:11:20 compute-0 sudo[193457]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:11:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:11:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:20 compute-0 sudo[194061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:11:20 compute-0 sudo[194061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:20 compute-0 sudo[194061]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:20 compute-0 sudo[194188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjchtyocrqofeosrsdmzkpxpphvqdcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847080.2976434-404-132560578670922/AnsiballZ_systemd.py'
Jan 31 08:11:20 compute-0 sudo[194188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:20 compute-0 python3.9[194190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:20 compute-0 sudo[194188]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:21 compute-0 sudo[194343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weymkfjxvxwtcsovmyjcwzhrkzkektdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847080.9753873-404-217190679376051/AnsiballZ_systemd.py'
Jan 31 08:11:21 compute-0 sudo[194343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:21 compute-0 ceph-mon[75294]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:11:21 compute-0 python3.9[194345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:21 compute-0 sudo[194343]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:21 compute-0 sudo[194498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-degrqgdexuwsaimdoyhbjpnxhvlpqlad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847081.7007182-404-59733447665930/AnsiballZ_systemd.py'
Jan 31 08:11:21 compute-0 sudo[194498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:22 compute-0 python3.9[194500]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:22 compute-0 sudo[194498]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-0 sudo[194653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmtohkznvbbfrarlqbvrzlghmhsfusmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847082.3935223-404-34794616919743/AnsiballZ_systemd.py'
Jan 31 08:11:22 compute-0 sudo[194653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:22 compute-0 python3.9[194655]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:22 compute-0 sudo[194653]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:23 compute-0 ceph-mon[75294]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:23 compute-0 sudo[194808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgydwhpnqsavaemzwnbydysenykxynwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847083.058145-404-106309238270303/AnsiballZ_systemd.py'
Jan 31 08:11:23 compute-0 sudo[194808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:23 compute-0 python3.9[194810]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:23 compute-0 sudo[194808]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 sudo[194963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbljtodwwoqylllayotbebanjuiiylox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847083.7840173-404-159328080112300/AnsiballZ_systemd.py'
Jan 31 08:11:24 compute-0 sudo[194963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:24 compute-0 python3.9[194965]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:24 compute-0 sudo[194963]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 sudo[195118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjvneyjjywhicunlsnurbyncrbferpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847084.5049183-404-144607292043153/AnsiballZ_systemd.py'
Jan 31 08:11:24 compute-0 sudo[195118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:25 compute-0 python3.9[195120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:25 compute-0 sudo[195118]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:25 compute-0 ceph-mon[75294]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:25 compute-0 sudo[195273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piifnwembudsvccmxiqendffjxgzxpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847085.1775603-404-258819848080421/AnsiballZ_systemd.py'
Jan 31 08:11:25 compute-0 sudo[195273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:25 compute-0 python3.9[195275]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:25 compute-0 sudo[195273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 sudo[195428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huqulresyzppyvqjyoxyqjykuunocarl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847085.8020544-404-160604116239022/AnsiballZ_systemd.py'
Jan 31 08:11:26 compute-0 sudo[195428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:26 compute-0 python3.9[195430]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:26 compute-0 sudo[195428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:26 compute-0 sudo[195583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucedciqtciiwtrldwhbtqukagrdhikyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847086.5000217-404-99859942118559/AnsiballZ_systemd.py'
Jan 31 08:11:26 compute-0 sudo[195583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:26 compute-0 python3.9[195585]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:11:27 compute-0 sudo[195583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:27 compute-0 ceph-mon[75294]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:27 compute-0 sudo[195738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxomufszawjzstweophesldszutymbbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847087.3718212-506-36559365682057/AnsiballZ_file.py'
Jan 31 08:11:27 compute-0 sudo[195738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:27 compute-0 python3.9[195740]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:27 compute-0 sudo[195738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-0 sudo[195890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riyaftyvhfnepdsqsmygvwnjnqmfalrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847087.9193282-506-223022968265854/AnsiballZ_file.py'
Jan 31 08:11:28 compute-0 sudo[195890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:28 compute-0 python3.9[195892]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:28 compute-0 sudo[195890]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-0 sudo[196042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhgqgectlqhxybzlwfrenjzsgrrnsvgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847088.441599-506-157655937763913/AnsiballZ_file.py'
Jan 31 08:11:28 compute-0 sudo[196042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:28 compute-0 python3.9[196044]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:28 compute-0 sudo[196042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:29 compute-0 ceph-mon[75294]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:29 compute-0 sudo[196194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxpuzxpqbilyttxvdruvrqmpcnzqnca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847088.9554896-506-125210080856297/AnsiballZ_file.py'
Jan 31 08:11:29 compute-0 sudo[196194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:29 compute-0 python3.9[196196]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:29 compute-0 sudo[196194]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:29 compute-0 sudo[196346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aylrfigpdjcyysvvvbjrjosgzxmmsxdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847089.7154255-506-55746129189569/AnsiballZ_file.py'
Jan 31 08:11:29 compute-0 sudo[196346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:30 compute-0 python3.9[196348]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:30 compute-0 sudo[196346]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:30 compute-0 sudo[196498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxhaofjvpdslpnmlubdjqxrjfhhwtmle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847090.2937758-506-247797781918172/AnsiballZ_file.py'
Jan 31 08:11:30 compute-0 sudo[196498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:30 compute-0 python3.9[196500]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:30 compute-0 sudo[196498]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:31 compute-0 ceph-mon[75294]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:31 compute-0 python3.9[196650]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:11:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:31 compute-0 sudo[196800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tswxpkrzlpvjojhjaqvjillwlrhrdugb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847091.5192914-557-96556679522501/AnsiballZ_stat.py'
Jan 31 08:11:31 compute-0 sudo[196800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:32 compute-0 python3.9[196802]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:32 compute-0 sudo[196800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:32 compute-0 sudo[196925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dewxaxvipvmphzbakoktjaybrpwrpyhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847091.5192914-557-96556679522501/AnsiballZ_copy.py'
Jan 31 08:11:32 compute-0 sudo[196925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:32 compute-0 python3.9[196927]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847091.5192914-557-96556679522501/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:32 compute-0 sudo[196925]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:33 compute-0 sudo[197077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lurlkocbefjpwofemijtqfxenvkdcxow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847092.8541632-557-64718446901615/AnsiballZ_stat.py'
Jan 31 08:11:33 compute-0 sudo[197077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:33 compute-0 python3.9[197079]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:33 compute-0 sudo[197077]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:33 compute-0 ceph-mon[75294]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:33 compute-0 sudo[197202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmtynzkslloyjmelujobmsefmbkmahld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847092.8541632-557-64718446901615/AnsiballZ_copy.py'
Jan 31 08:11:33 compute-0 sudo[197202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:33 compute-0 python3.9[197204]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847092.8541632-557-64718446901615/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:33 compute-0 sudo[197202]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:34 compute-0 sudo[197354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdoecsjbrhxottufghpyhjfmjuryknty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847094.0059483-557-40529326420246/AnsiballZ_stat.py'
Jan 31 08:11:34 compute-0 sudo[197354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:34 compute-0 python3.9[197356]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:34 compute-0 sudo[197354]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:34 compute-0 sudo[197479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqljpajjhkwowhaccygzejyktiatlmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847094.0059483-557-40529326420246/AnsiballZ_copy.py'
Jan 31 08:11:34 compute-0 sudo[197479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:35 compute-0 python3.9[197481]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847094.0059483-557-40529326420246/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:35 compute-0 sudo[197479]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:35 compute-0 ceph-mon[75294]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:35 compute-0 sudo[197631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bctsggacmgjpdxqzgvvcdolhymvcwdht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847095.1636624-557-183545415310027/AnsiballZ_stat.py'
Jan 31 08:11:35 compute-0 sudo[197631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:35 compute-0 python3.9[197633]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:35 compute-0 sudo[197631]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:35 compute-0 sudo[197756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytzlolyqdceqjymqvuqjkdisixydgbkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847095.1636624-557-183545415310027/AnsiballZ_copy.py'
Jan 31 08:11:35 compute-0 sudo[197756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:36 compute-0 python3.9[197758]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847095.1636624-557-183545415310027/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:36 compute-0 sudo[197756]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:36 compute-0 sudo[197908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkxdxnqxfpgdiqvqboifotwxtkdgtxed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847096.2923446-557-140483878755485/AnsiballZ_stat.py'
Jan 31 08:11:36 compute-0 sudo[197908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:36 compute-0 python3.9[197910]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:36 compute-0 sudo[197908]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:37 compute-0 sudo[198033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfufdwjjurbkryuejfqjuukfjqujtful ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847096.2923446-557-140483878755485/AnsiballZ_copy.py'
Jan 31 08:11:37 compute-0 sudo[198033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:37 compute-0 python3.9[198035]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847096.2923446-557-140483878755485/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:37 compute-0 sudo[198033]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:37 compute-0 ceph-mon[75294]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:37 compute-0 sudo[198185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlpjnswkqhfatclursateeluytdhsbaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847097.4424453-557-107144548878569/AnsiballZ_stat.py'
Jan 31 08:11:37 compute-0 sudo[198185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:37 compute-0 python3.9[198187]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:37 compute-0 sudo[198185]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:38 compute-0 sudo[198310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eipjlpplrmevthdxekkyyuqdpirtiruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847097.4424453-557-107144548878569/AnsiballZ_copy.py'
Jan 31 08:11:38 compute-0 sudo[198310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:38 compute-0 python3.9[198312]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847097.4424453-557-107144548878569/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:38 compute-0 sudo[198310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:38 compute-0 ceph-mon[75294]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:38 compute-0 sudo[198462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhommgofndchffzontnlczlfoltukgdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847098.4936562-557-163661158886468/AnsiballZ_stat.py'
Jan 31 08:11:38 compute-0 sudo[198462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:38 compute-0 python3.9[198464]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:38 compute-0 sudo[198462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:39 compute-0 sudo[198585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amdpqunowgsrcuvntqvrzfhadktyasxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847098.4936562-557-163661158886468/AnsiballZ_copy.py'
Jan 31 08:11:39 compute-0 sudo[198585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:39 compute-0 python3.9[198587]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847098.4936562-557-163661158886468/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:39 compute-0 sudo[198585]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:39 compute-0 sudo[198737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqalqbdsufgyulukjpzrifbdvlryducf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847099.619558-557-39843805046468/AnsiballZ_stat.py'
Jan 31 08:11:39 compute-0 sudo[198737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:40 compute-0 python3.9[198739]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:40 compute-0 sudo[198737]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:40 compute-0 sudo[198862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwyflnibgfqvdxtlssliqqhhodqafjxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847099.619558-557-39843805046468/AnsiballZ_copy.py'
Jan 31 08:11:40 compute-0 sudo[198862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:40 compute-0 python3.9[198864]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847099.619558-557-39843805046468/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:40 compute-0 sudo[198862]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:40 compute-0 ceph-mon[75294]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:40 compute-0 sudo[199014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gghmmjmcngxsxptwurbnftxaopjtsubs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847100.6971035-670-279673443875846/AnsiballZ_command.py'
Jan 31 08:11:40 compute-0 sudo[199014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:41 compute-0 python3.9[199016]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 08:11:41 compute-0 sudo[199014]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:41 compute-0 sudo[199167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boobrsaaiiaafmgwzsmvniadxdaawrdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847101.386996-679-141914648703910/AnsiballZ_file.py'
Jan 31 08:11:41 compute-0 sudo[199167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:41 compute-0 python3.9[199169]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:41 compute-0 sudo[199167]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:42 compute-0 sudo[199319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuouqbtqtxvedfmtgadqbjilfoouryik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847102.077417-679-411192519824/AnsiballZ_file.py'
Jan 31 08:11:42 compute-0 sudo[199319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:42 compute-0 python3.9[199321]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:42 compute-0 sudo[199319]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:42 compute-0 ceph-mon[75294]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:42 compute-0 sudo[199482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yffaefilipbalpcaiosatfdbwrdwxqmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847102.6246226-679-128175084886424/AnsiballZ_file.py'
Jan 31 08:11:42 compute-0 sudo[199482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:42 compute-0 podman[199445]: 2026-01-31 08:11:42.906491849 +0000 UTC m=+0.089855313 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:43 compute-0 python3.9[199489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:43 compute-0 sudo[199482]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:43 compute-0 sudo[199648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbahawktnemiyhfcokvmiyjokcjvxhff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847103.2182448-679-11214941598357/AnsiballZ_file.py'
Jan 31 08:11:43 compute-0 sudo[199648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:43 compute-0 python3.9[199650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:43 compute-0 sudo[199648]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:43 compute-0 sudo[199800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpicgdhyxlloianrjxyqijdhvuuzrrsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847103.7425869-679-170193727309699/AnsiballZ_file.py'
Jan 31 08:11:43 compute-0 sudo[199800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:44 compute-0 python3.9[199802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:44 compute-0 sudo[199800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:44 compute-0 sudo[199952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpnkqgozqrtcfoeggbeszxmkydicxzji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847104.2885356-679-259977745182775/AnsiballZ_file.py'
Jan 31 08:11:44 compute-0 sudo[199952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:44 compute-0 ceph-mon[75294]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:44 compute-0 python3.9[199954]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:44 compute-0 sudo[199952]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:45 compute-0 sudo[200104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbcwworgnktxplnzhputycxngstgzyoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847104.820801-679-57227480309663/AnsiballZ_file.py'
Jan 31 08:11:45 compute-0 sudo[200104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:45 compute-0 python3.9[200106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:45 compute-0 sudo[200104]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:45 compute-0 sudo[200256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgcadmtrmzybmkmfrgpszlrfdrgtbwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847105.363547-679-111434982401094/AnsiballZ_file.py'
Jan 31 08:11:45 compute-0 sudo[200256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:45 compute-0 python3.9[200258]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:45 compute-0 sudo[200256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:46 compute-0 sudo[200408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrnpdifxlahxoqspmuchhcoldzzjiohm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847105.8838983-679-212381722260081/AnsiballZ_file.py'
Jan 31 08:11:46 compute-0 sudo[200408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:46 compute-0 python3.9[200410]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:46 compute-0 sudo[200408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:46 compute-0 ceph-mon[75294]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:46 compute-0 sudo[200560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aisrbddlzkdzvlufnvxxnwrbenunxmqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847106.5662558-679-1754885058451/AnsiballZ_file.py'
Jan 31 08:11:46 compute-0 sudo[200560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:11:46.953 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:11:46.953 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:46 compute-0 python3.9[200562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:11:46.953 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:46 compute-0 sudo[200560]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:47 compute-0 sudo[200712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yesdmjmdugkltdptdyawcbdiiczdhqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847107.0674534-679-82041863381941/AnsiballZ_file.py'
Jan 31 08:11:47 compute-0 sudo[200712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:47 compute-0 python3.9[200714]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:47 compute-0 sudo[200712]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:47 compute-0 sudo[200864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbvmbusenysoecztbkrhfzwzlxphfbof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847107.6303613-679-15101492278307/AnsiballZ_file.py'
Jan 31 08:11:47 compute-0 sudo[200864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:48 compute-0 python3.9[200866]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:48 compute-0 sudo[200864]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:48 compute-0 sudo[201031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaverkuslflbctkokkgapacuktpitotz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847108.1866791-679-20773872977660/AnsiballZ_file.py'
Jan 31 08:11:48 compute-0 sudo[201031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:48 compute-0 podman[200990]: 2026-01-31 08:11:48.426629307 +0000 UTC m=+0.052731172 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:11:48 compute-0 python3.9[201037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:48 compute-0 sudo[201031]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:48 compute-0 ceph-mon[75294]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:48 compute-0 sudo[201187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvjrlhpxjayhlyafrfivuqnhakgajixx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847108.7486432-679-261461318997705/AnsiballZ_file.py'
Jan 31 08:11:48 compute-0 sudo[201187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:49 compute-0 python3.9[201189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:49 compute-0 sudo[201187]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:49 compute-0 sudo[201339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owiaznwzadsjfyeqixtlpdgrfimvpvqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847109.3113692-778-259837376079987/AnsiballZ_stat.py'
Jan 31 08:11:49 compute-0 sudo[201339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:49 compute-0 python3.9[201341]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:49 compute-0 sudo[201339]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:49 compute-0 sudo[201462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osldiplruleejmprdzvpkpbnmnkayqyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847109.3113692-778-259837376079987/AnsiballZ_copy.py'
Jan 31 08:11:49 compute-0 sudo[201462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:50 compute-0 python3.9[201464]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847109.3113692-778-259837376079987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:50 compute-0 sudo[201462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:50 compute-0 sudo[201614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqyrkxqvbycsolpauvioxekhynuwouq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847110.2824872-778-30781909950497/AnsiballZ_stat.py'
Jan 31 08:11:50 compute-0 sudo[201614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:50 compute-0 ceph-mon[75294]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:50 compute-0 python3.9[201616]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:50 compute-0 sudo[201614]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:11:50
Jan 31 08:11:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:11:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:11:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', '.mgr']
Jan 31 08:11:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:11:51 compute-0 sudo[201737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nranusfcuhhceapyihyalfgakbvbzsoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847110.2824872-778-30781909950497/AnsiballZ_copy.py'
Jan 31 08:11:51 compute-0 sudo[201737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:51 compute-0 python3.9[201739]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847110.2824872-778-30781909950497/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:51 compute-0 sudo[201737]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:51 compute-0 sudo[201889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqipzkcloiatfedjeukntegzwslskzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847111.4250484-778-270668791762872/AnsiballZ_stat.py'
Jan 31 08:11:51 compute-0 sudo[201889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:51 compute-0 python3.9[201891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:51 compute-0 sudo[201889]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:52 compute-0 sudo[202012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcrzezjumyafbxkibpglwoqddarsjeis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847111.4250484-778-270668791762872/AnsiballZ_copy.py'
Jan 31 08:11:52 compute-0 sudo[202012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:52 compute-0 python3.9[202014]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847111.4250484-778-270668791762872/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:52 compute-0 sudo[202012]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:52 compute-0 sudo[202164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drdaxihzyiaqlcenevgfgmixvadmgfue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847112.4065082-778-197799397399602/AnsiballZ_stat.py'
Jan 31 08:11:52 compute-0 sudo[202164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:52 compute-0 ceph-mon[75294]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:52 compute-0 python3.9[202166]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:52 compute-0 sudo[202164]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:53 compute-0 sudo[202287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhxoppxvzkvdvuhemskieldgbelghfkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847112.4065082-778-197799397399602/AnsiballZ_copy.py'
Jan 31 08:11:53 compute-0 sudo[202287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:53 compute-0 python3.9[202289]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847112.4065082-778-197799397399602/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:53 compute-0 sudo[202287]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:53 compute-0 sudo[202439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyuzaellypniljgnsaehgudeeehgpnan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847113.384317-778-76479565213820/AnsiballZ_stat.py'
Jan 31 08:11:53 compute-0 sudo[202439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:53 compute-0 python3.9[202441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:53 compute-0 sudo[202439]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:54 compute-0 sudo[202562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbbxvrgvpwteqqknfhxwksgllfsejajc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847113.384317-778-76479565213820/AnsiballZ_copy.py'
Jan 31 08:11:54 compute-0 sudo[202562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:54 compute-0 python3.9[202564]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847113.384317-778-76479565213820/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:54 compute-0 sudo[202562]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:54 compute-0 sudo[202714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owqfcdgjkdhjonqvvwowgrtogxagwxsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847114.359625-778-98944951111492/AnsiballZ_stat.py'
Jan 31 08:11:54 compute-0 sudo[202714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:54 compute-0 ceph-mon[75294]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:54 compute-0 python3.9[202716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:54 compute-0 sudo[202714]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:55 compute-0 sudo[202837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnaryfhnafulrijtojotgbnyeywaaalq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847114.359625-778-98944951111492/AnsiballZ_copy.py'
Jan 31 08:11:55 compute-0 sudo[202837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:55 compute-0 python3.9[202839]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847114.359625-778-98944951111492/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:55 compute-0 sudo[202837]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:11:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:11:55 compute-0 sudo[202989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirxyzwjkvitludsrqjmzcleqvdtlxlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847115.4343846-778-219462612878399/AnsiballZ_stat.py'
Jan 31 08:11:55 compute-0 sudo[202989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:55 compute-0 python3.9[202991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:55 compute-0 sudo[202989]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:56 compute-0 sudo[203112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywylgappiefnnpncxknebkejpziygyeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847115.4343846-778-219462612878399/AnsiballZ_copy.py'
Jan 31 08:11:56 compute-0 sudo[203112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:56 compute-0 python3.9[203114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847115.4343846-778-219462612878399/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:56 compute-0 sudo[203112]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:56 compute-0 ceph-mon[75294]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:56 compute-0 sudo[203264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tswlucvqkwlfcgssznrgtgnmdihjftpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847116.5792863-778-246315726983712/AnsiballZ_stat.py'
Jan 31 08:11:56 compute-0 sudo[203264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:57 compute-0 python3.9[203266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:57 compute-0 sudo[203264]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:57 compute-0 auditd[699]: Audit daemon rotating log files
Jan 31 08:11:57 compute-0 sudo[203387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lorgmqhmzvzuxhfrgsmketpuifpvehsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847116.5792863-778-246315726983712/AnsiballZ_copy.py'
Jan 31 08:11:57 compute-0 sudo[203387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:57 compute-0 python3.9[203389]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847116.5792863-778-246315726983712/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:57 compute-0 sudo[203387]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:58 compute-0 sudo[203539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvhvogopirkwremsczmicsstxpmdgpbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847117.7699645-778-135258106324250/AnsiballZ_stat.py'
Jan 31 08:11:58 compute-0 sudo[203539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:58 compute-0 python3.9[203541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:58 compute-0 sudo[203539]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:58 compute-0 sudo[203662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugnxjuiwqmndqeakrxkikzkjxnvjkjwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847117.7699645-778-135258106324250/AnsiballZ_copy.py'
Jan 31 08:11:58 compute-0 sudo[203662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:58 compute-0 python3.9[203664]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847117.7699645-778-135258106324250/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:58 compute-0 sudo[203662]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:58 compute-0 ceph-mon[75294]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:59 compute-0 sudo[203814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcltoaladhxbdlyxvbqfltrxhyubecy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847118.8678117-778-244979650125696/AnsiballZ_stat.py'
Jan 31 08:11:59 compute-0 sudo[203814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:59 compute-0 python3.9[203816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:59 compute-0 sudo[203814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:59 compute-0 sudo[203937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruusogwztmfvihnagvnmzcrvivdctfas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847118.8678117-778-244979650125696/AnsiballZ_copy.py'
Jan 31 08:11:59 compute-0 sudo[203937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:59 compute-0 python3.9[203939]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847118.8678117-778-244979650125696/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:59 compute-0 sudo[203937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:00 compute-0 sudo[204089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbjfaetdiskuwiecqtvtxpnedjjxbmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847120.0288463-778-273460284382981/AnsiballZ_stat.py'
Jan 31 08:12:00 compute-0 sudo[204089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:00 compute-0 python3.9[204091]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:00 compute-0 sudo[204089]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:00 compute-0 sudo[204212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-horgsrejgjolrxuygjdghkvngpjvszbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847120.0288463-778-273460284382981/AnsiballZ_copy.py'
Jan 31 08:12:00 compute-0 sudo[204212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:00 compute-0 ceph-mon[75294]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:01 compute-0 python3.9[204214]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847120.0288463-778-273460284382981/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:01 compute-0 sudo[204212]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:01 compute-0 sudo[204364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrrkwtiwlvuizirawxefkqghlygqwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847121.1391547-778-199866121616290/AnsiballZ_stat.py'
Jan 31 08:12:01 compute-0 sudo[204364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:01 compute-0 python3.9[204366]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:01 compute-0 sudo[204364]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:01 compute-0 sudo[204487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kivxopqjfbdbqmufrbdsncpimqpjirri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847121.1391547-778-199866121616290/AnsiballZ_copy.py'
Jan 31 08:12:01 compute-0 sudo[204487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:02 compute-0 python3.9[204489]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847121.1391547-778-199866121616290/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:02 compute-0 sudo[204487]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:02 compute-0 sudo[204639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwaijeztxntibyrdmessljfpvstqywwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847122.294288-778-217661554612403/AnsiballZ_stat.py'
Jan 31 08:12:02 compute-0 sudo[204639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:02 compute-0 python3.9[204641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:02 compute-0 sudo[204639]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:02 compute-0 ceph-mon[75294]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:03 compute-0 sudo[204762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtziepdotkwudogocwwdbitofsxasvxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847122.294288-778-217661554612403/AnsiballZ_copy.py'
Jan 31 08:12:03 compute-0 sudo[204762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:03 compute-0 python3.9[204764]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847122.294288-778-217661554612403/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:03 compute-0 sudo[204762]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:03 compute-0 sudo[204914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cducumcjjahrwecrlyejwamcvbixeuid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847123.3748107-778-200448903372437/AnsiballZ_stat.py'
Jan 31 08:12:03 compute-0 sudo[204914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:03 compute-0 python3.9[204916]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:03 compute-0 sudo[204914]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-0 sudo[205037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydtmtwibqmrmavcccpgoktddidhdtqir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847123.3748107-778-200448903372437/AnsiballZ_copy.py'
Jan 31 08:12:04 compute-0 sudo[205037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:04 compute-0 python3.9[205039]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847123.3748107-778-200448903372437/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:04 compute-0 sudo[205037]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-0 ceph-mon[75294]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:04 compute-0 python3.9[205189]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:05 compute-0 sudo[205342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phayuibnygmhivbmtjoibeagfnugqcpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847125.2027533-984-140447548305430/AnsiballZ_seboolean.py'
Jan 31 08:12:05 compute-0 sudo[205342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:05 compute-0 python3.9[205344]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:12:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:06 compute-0 sshd-session[205346]: Invalid user solana from 193.32.162.145 port 60350
Jan 31 08:12:07 compute-0 sshd-session[205346]: Connection closed by invalid user solana 193.32.162.145 port 60350 [preauth]
Jan 31 08:12:07 compute-0 ceph-mon[75294]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:07 compute-0 sudo[205342]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:07 compute-0 sudo[205500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqxcnjcyuijejbyqjpcmxqyenjagopw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847127.334655-992-252151571268375/AnsiballZ_copy.py'
Jan 31 08:12:07 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 08:12:07 compute-0 sudo[205500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:07 compute-0 python3.9[205502]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:07 compute-0 sudo[205500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:08 compute-0 sudo[205652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obhipkioqfcggvdesulsslujvaefvdnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847128.0949697-992-21747350254836/AnsiballZ_copy.py'
Jan 31 08:12:08 compute-0 sudo[205652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:08 compute-0 python3.9[205654]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:08 compute-0 sudo[205652]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:08 compute-0 sudo[205804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxwlygaphzujczaxtidpgcpffdmeucos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847128.7243657-992-161867791266104/AnsiballZ_copy.py'
Jan 31 08:12:08 compute-0 sudo[205804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:09 compute-0 ceph-mon[75294]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:09 compute-0 python3.9[205806]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:09 compute-0 sudo[205804]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:09 compute-0 sudo[205956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnonyifyyuafphlmaoealvdcybhtipec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847129.3551373-992-70716591188939/AnsiballZ_copy.py'
Jan 31 08:12:09 compute-0 sudo[205956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:09 compute-0 python3.9[205958]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:09 compute-0 sudo[205956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:10 compute-0 sudo[206108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hryccndcfiqhwrzecvmapjltmbpjigcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847129.912578-992-58271181709971/AnsiballZ_copy.py'
Jan 31 08:12:10 compute-0 sudo[206108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:10 compute-0 python3.9[206110]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:10 compute-0 sudo[206108]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:10 compute-0 sudo[206260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qysfmfwrjlyvqojdanzqvqcpdpnianlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847130.5015202-1028-94556830498726/AnsiballZ_copy.py'
Jan 31 08:12:10 compute-0 sudo[206260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:10 compute-0 python3.9[206262]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:10 compute-0 sudo[206260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:11 compute-0 ceph-mon[75294]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:11 compute-0 sudo[206412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsfohqahpddjxsioninqgthvmjdowfue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847131.0450768-1028-45786198088826/AnsiballZ_copy.py'
Jan 31 08:12:11 compute-0 sudo[206412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:11 compute-0 python3.9[206414]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:11 compute-0 sudo[206412]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:11 compute-0 sudo[206564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyipahcxtcklywtnmqziqnlyxkybkckd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847131.6078227-1028-39263849022216/AnsiballZ_copy.py'
Jan 31 08:12:11 compute-0 sudo[206564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:12 compute-0 python3.9[206566]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:12 compute-0 sudo[206564]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:12 compute-0 sudo[206716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvynclsregmphtpulvfuurlwcjjlmzfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847132.319054-1028-247792723107480/AnsiballZ_copy.py'
Jan 31 08:12:12 compute-0 sudo[206716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:12 compute-0 python3.9[206718]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:12 compute-0 sudo[206716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:13 compute-0 sudo[206881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzgitlpygkpjkqeyvdqjzbnkwtqowfid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847132.9257896-1028-273289431711530/AnsiballZ_copy.py'
Jan 31 08:12:13 compute-0 sudo[206881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:13 compute-0 podman[206842]: 2026-01-31 08:12:13.331959784 +0000 UTC m=+0.192017183 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:12:13 compute-0 python3.9[206887]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:13 compute-0 sudo[206881]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:13 compute-0 ceph-mon[75294]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:13 compute-0 sudo[207046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggziosavxidptqdybspniutebmlglvny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847133.6237662-1064-169494649988556/AnsiballZ_systemd.py'
Jan 31 08:12:13 compute-0 sudo[207046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:14 compute-0 python3.9[207048]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:12:14 compute-0 systemd[1]: Reloading.
Jan 31 08:12:14 compute-0 systemd-sysv-generator[207078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:14 compute-0 systemd-rc-local-generator[207072]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:14 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 08:12:14 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 08:12:14 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 08:12:14 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 08:12:14 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 31 08:12:14 compute-0 ceph-mon[75294]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:14 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 31 08:12:14 compute-0 sudo[207046]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:15 compute-0 sudo[207239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwpcfcjhgbpnonfpobezzylawsnefikd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847134.8706942-1064-260012193076177/AnsiballZ_systemd.py'
Jan 31 08:12:15 compute-0 sudo[207239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:15 compute-0 python3.9[207241]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:12:15 compute-0 systemd[1]: Reloading.
Jan 31 08:12:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:15 compute-0 systemd-sysv-generator[207270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:15 compute-0 systemd-rc-local-generator[207265]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:15 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 08:12:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 08:12:15 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 08:12:15 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 08:12:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 08:12:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 08:12:15 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 08:12:15 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 08:12:15 compute-0 sudo[207239]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:16 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 08:12:16 compute-0 sudo[207455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmhmtbazgmhhftgufpdcreghtoqotftf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847135.9377358-1064-146574457807993/AnsiballZ_systemd.py'
Jan 31 08:12:16 compute-0 sudo[207455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:16 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 08:12:16 compute-0 python3.9[207457]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:12:16 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 08:12:16 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 08:12:16 compute-0 systemd[1]: Reloading.
Jan 31 08:12:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:16 compute-0 systemd-rc-local-generator[207487]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:16 compute-0 systemd-sysv-generator[207492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:16 compute-0 ceph-mon[75294]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:16 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 08:12:16 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 08:12:16 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 08:12:16 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 08:12:16 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 08:12:16 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 08:12:16 compute-0 sudo[207455]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:17 compute-0 sudo[207675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgryojbtncobljzssknraucfpbjqugz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847136.9588275-1064-170695346164795/AnsiballZ_systemd.py'
Jan 31 08:12:17 compute-0 sudo[207675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:17 compute-0 setroubleshoot[207385]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1d16fb5f-2227-4f38-a887-0bd7b7bccc4e
Jan 31 08:12:17 compute-0 setroubleshoot[207385]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 08:12:17 compute-0 python3.9[207677]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:12:17 compute-0 systemd[1]: Reloading.
Jan 31 08:12:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:17 compute-0 systemd-rc-local-generator[207703]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:17 compute-0 systemd-sysv-generator[207708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:17 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 08:12:17 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 08:12:17 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 08:12:17 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 08:12:17 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 08:12:17 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 08:12:17 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 08:12:17 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 08:12:17 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 08:12:17 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 08:12:17 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 08:12:17 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 08:12:17 compute-0 sudo[207675]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:18 compute-0 sudo[207891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zehlcppxmyxlhitgbjbekpovrzjgkikc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847137.8921623-1064-185087219743903/AnsiballZ_systemd.py'
Jan 31 08:12:18 compute-0 sudo[207891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:18 compute-0 python3.9[207893]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:12:18 compute-0 systemd[1]: Reloading.
Jan 31 08:12:18 compute-0 podman[207895]: 2026-01-31 08:12:18.518318315 +0000 UTC m=+0.059108949 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:12:18 compute-0 systemd-sysv-generator[207938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:18 compute-0 systemd-rc-local-generator[207935]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:18 compute-0 ceph-mon[75294]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:18 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 08:12:18 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 08:12:18 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 08:12:18 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 08:12:18 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 08:12:18 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 08:12:18 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 31 08:12:18 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 31 08:12:18 compute-0 sudo[207891]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:19 compute-0 sudo[208121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amnxueegojijqvmeqjzpcjxshldqrgya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847139.0466464-1101-184048900908107/AnsiballZ_file.py'
Jan 31 08:12:19 compute-0 sudo[208121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:19 compute-0 python3.9[208123]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:19 compute-0 sudo[208121]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:19 compute-0 sudo[208273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wouzylleldbkdbsfhszlgxwiecsrmnma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847139.576041-1109-119131646086858/AnsiballZ_find.py'
Jan 31 08:12:19 compute-0 sudo[208273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:19 compute-0 python3.9[208275]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:12:19 compute-0 sudo[208273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:20 compute-0 sudo[208425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqpmxdnxkxytnvkgijkukweftpjgyzrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847140.1271865-1117-116600167321836/AnsiballZ_command.py'
Jan 31 08:12:20 compute-0 sudo[208425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:20 compute-0 sudo[208428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:20 compute-0 sudo[208428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:20 compute-0 sudo[208428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:20 compute-0 sudo[208453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:12:20 compute-0 sudo[208453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:20 compute-0 python3.9[208427]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:20 compute-0 sudo[208425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:20 compute-0 ceph-mon[75294]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:20 compute-0 sudo[208453]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:12:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:12:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:12:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:12:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:12:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:12:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:12:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:12:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:12:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:12:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:20 compute-0 sudo[208644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:20 compute-0 sudo[208644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:20 compute-0 sudo[208644]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:20 compute-0 sudo[208688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:12:20 compute-0 sudo[208688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:21 compute-0 python3.9[208683]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:12:21 compute-0 podman[208749]: 2026-01-31 08:12:21.189870637 +0000 UTC m=+0.037637194 container create 2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:12:21 compute-0 systemd[1]: Started libpod-conmon-2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa.scope.
Jan 31 08:12:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:21 compute-0 podman[208749]: 2026-01-31 08:12:21.259737938 +0000 UTC m=+0.107504505 container init 2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:12:21 compute-0 podman[208749]: 2026-01-31 08:12:21.26534005 +0000 UTC m=+0.113106597 container start 2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:21 compute-0 podman[208749]: 2026-01-31 08:12:21.170948479 +0000 UTC m=+0.018715126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:21 compute-0 podman[208749]: 2026-01-31 08:12:21.268712955 +0000 UTC m=+0.116479512 container attach 2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:12:21 compute-0 festive_nobel[208765]: 167 167
Jan 31 08:12:21 compute-0 systemd[1]: libpod-2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa.scope: Deactivated successfully.
Jan 31 08:12:21 compute-0 podman[208749]: 2026-01-31 08:12:21.270610324 +0000 UTC m=+0.118376901 container died 2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7187a6b3e0eae5a33ad6c09b8b060a813e0c87dc3214091fcaf4cbf67c3de2-merged.mount: Deactivated successfully.
Jan 31 08:12:21 compute-0 podman[208749]: 2026-01-31 08:12:21.300663615 +0000 UTC m=+0.148430172 container remove 2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:12:21 compute-0 systemd[1]: libpod-conmon-2fddb941e59cc64a0115264c7a806d124a0fa7bd140990904db4231473beccaa.scope: Deactivated successfully.
Jan 31 08:12:21 compute-0 podman[208843]: 2026-01-31 08:12:21.417727021 +0000 UTC m=+0.033985942 container create 1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_germain, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:21 compute-0 systemd[1]: Started libpod-conmon-1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c.scope.
Jan 31 08:12:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3d65a5f406ff7248286d6c2f74b2456141eb2e03400fdf3e32df36285162ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3d65a5f406ff7248286d6c2f74b2456141eb2e03400fdf3e32df36285162ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3d65a5f406ff7248286d6c2f74b2456141eb2e03400fdf3e32df36285162ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3d65a5f406ff7248286d6c2f74b2456141eb2e03400fdf3e32df36285162ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3d65a5f406ff7248286d6c2f74b2456141eb2e03400fdf3e32df36285162ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:21 compute-0 podman[208843]: 2026-01-31 08:12:21.403823729 +0000 UTC m=+0.020082680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:21 compute-0 podman[208843]: 2026-01-31 08:12:21.502063818 +0000 UTC m=+0.118322769 container init 1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:21 compute-0 podman[208843]: 2026-01-31 08:12:21.514739689 +0000 UTC m=+0.130998610 container start 1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_germain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 31 08:12:21 compute-0 podman[208843]: 2026-01-31 08:12:21.517705184 +0000 UTC m=+0.133964115 container attach 1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_germain, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:12:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:21 compute-0 python3.9[208938]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:12:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:12:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:12:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:12:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:21 compute-0 romantic_germain[208887]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:12:21 compute-0 romantic_germain[208887]: --> All data devices are unavailable
Jan 31 08:12:21 compute-0 systemd[1]: libpod-1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c.scope: Deactivated successfully.
Jan 31 08:12:21 compute-0 podman[208843]: 2026-01-31 08:12:21.945929475 +0000 UTC m=+0.562188426 container died 1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_germain, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a3d65a5f406ff7248286d6c2f74b2456141eb2e03400fdf3e32df36285162ea-merged.mount: Deactivated successfully.
Jan 31 08:12:21 compute-0 podman[208843]: 2026-01-31 08:12:21.984838961 +0000 UTC m=+0.601097882 container remove 1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:12:21 compute-0 systemd[1]: libpod-conmon-1f1c69445ef5b39a6bb32731b7e45aa5f65b4bcf08d1a8a9115bc803a15f9d3c.scope: Deactivated successfully.
Jan 31 08:12:22 compute-0 sudo[208688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:22 compute-0 sudo[209087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:22 compute-0 sudo[209087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:22 compute-0 sudo[209087]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:22 compute-0 sudo[209112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:12:22 compute-0 sudo[209112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:22 compute-0 python3.9[209086]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847141.3213472-1136-228014830097785/.source.xml follow=False _original_basename=secret.xml.j2 checksum=176662c8ceea8cc74cad374cbc55535ec63e51ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:22 compute-0 podman[209174]: 2026-01-31 08:12:22.360434098 +0000 UTC m=+0.035396598 container create 08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:12:22 compute-0 systemd[1]: Started libpod-conmon-08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58.scope.
Jan 31 08:12:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:22 compute-0 podman[209174]: 2026-01-31 08:12:22.343855197 +0000 UTC m=+0.018817717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:22 compute-0 podman[209174]: 2026-01-31 08:12:22.449067133 +0000 UTC m=+0.124029663 container init 08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sammet, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:12:22 compute-0 podman[209174]: 2026-01-31 08:12:22.455336482 +0000 UTC m=+0.130298982 container start 08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:12:22 compute-0 podman[209174]: 2026-01-31 08:12:22.458685687 +0000 UTC m=+0.133648237 container attach 08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sammet, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:12:22 compute-0 romantic_sammet[209190]: 167 167
Jan 31 08:12:22 compute-0 podman[209174]: 2026-01-31 08:12:22.462039263 +0000 UTC m=+0.137001763 container died 08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:22 compute-0 systemd[1]: libpod-08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58.scope: Deactivated successfully.
Jan 31 08:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d78f2c2f54d598dbab7ce3c762ef50edba627781c047028bc1041cb950d883e-merged.mount: Deactivated successfully.
Jan 31 08:12:22 compute-0 podman[209174]: 2026-01-31 08:12:22.503215625 +0000 UTC m=+0.178178145 container remove 08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:22 compute-0 systemd[1]: libpod-conmon-08c99d4f4ccfec0c75942c8f04917b5b5106abbd40c23175e25a9d64df01fd58.scope: Deactivated successfully.
Jan 31 08:12:22 compute-0 podman[209289]: 2026-01-31 08:12:22.653890294 +0000 UTC m=+0.046434098 container create d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:12:22 compute-0 systemd[1]: Started libpod-conmon-d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1.scope.
Jan 31 08:12:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:22 compute-0 sudo[209360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfijpucnerummrctdinateslpzmeqzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847142.4605799-1151-90901439433182/AnsiballZ_command.py'
Jan 31 08:12:22 compute-0 sudo[209360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abbb327496c9ea46e3549d59f5dc8616963b3294c698be2a52b067feae0827e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abbb327496c9ea46e3549d59f5dc8616963b3294c698be2a52b067feae0827e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abbb327496c9ea46e3549d59f5dc8616963b3294c698be2a52b067feae0827e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abbb327496c9ea46e3549d59f5dc8616963b3294c698be2a52b067feae0827e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:22 compute-0 podman[209289]: 2026-01-31 08:12:22.632561382 +0000 UTC m=+0.025105216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:22 compute-0 podman[209289]: 2026-01-31 08:12:22.736797424 +0000 UTC m=+0.129341248 container init d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:22 compute-0 podman[209289]: 2026-01-31 08:12:22.742374605 +0000 UTC m=+0.134918409 container start d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:12:22 compute-0 podman[209289]: 2026-01-31 08:12:22.74923962 +0000 UTC m=+0.141783444 container attach d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:12:22 compute-0 ceph-mon[75294]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:22 compute-0 python3.9[209362]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine dc03f344-536f-5591-add9-31059f42637c
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:22 compute-0 polkitd[43559]: Registered Authentication Agent for unix-process:209368:332531 (system bus name :1.2570 [pkttyagent --process 209368 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 08:12:22 compute-0 polkitd[43559]: Unregistered Authentication Agent for unix-process:209368:332531 (system bus name :1.2570, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 08:12:22 compute-0 polkitd[43559]: Registered Authentication Agent for unix-process:209367:332531 (system bus name :1.2571 [pkttyagent --process 209367 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 08:12:22 compute-0 polkitd[43559]: Unregistered Authentication Agent for unix-process:209367:332531 (system bus name :1.2571, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 08:12:22 compute-0 hopeful_nash[209354]: {
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:     "0": [
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:         {
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "devices": [
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "/dev/loop3"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             ],
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_name": "ceph_lv0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_size": "21470642176",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "name": "ceph_lv0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "tags": {
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cluster_name": "ceph",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.crush_device_class": "",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.encrypted": "0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.objectstore": "bluestore",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osd_id": "0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.type": "block",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.vdo": "0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.with_tpm": "0"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             },
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "type": "block",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "vg_name": "ceph_vg0"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:         }
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:     ],
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:     "1": [
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:         {
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "devices": [
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "/dev/loop4"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             ],
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_name": "ceph_lv1",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_size": "21470642176",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "name": "ceph_lv1",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "tags": {
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cluster_name": "ceph",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.crush_device_class": "",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.encrypted": "0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.objectstore": "bluestore",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osd_id": "1",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.type": "block",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.vdo": "0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.with_tpm": "0"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             },
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "type": "block",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "vg_name": "ceph_vg1"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:         }
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:     ],
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:     "2": [
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:         {
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "devices": [
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "/dev/loop5"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             ],
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_name": "ceph_lv2",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_size": "21470642176",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "name": "ceph_lv2",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "tags": {
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.cluster_name": "ceph",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.crush_device_class": "",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.encrypted": "0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.objectstore": "bluestore",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osd_id": "2",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.type": "block",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.vdo": "0",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:                 "ceph.with_tpm": "0"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             },
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "type": "block",
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:             "vg_name": "ceph_vg2"
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:         }
Jan 31 08:12:22 compute-0 hopeful_nash[209354]:     ]
Jan 31 08:12:22 compute-0 hopeful_nash[209354]: }
Jan 31 08:12:22 compute-0 sudo[209360]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:22 compute-0 systemd[1]: libpod-d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1.scope: Deactivated successfully.
Jan 31 08:12:23 compute-0 podman[209381]: 2026-01-31 08:12:23.015621369 +0000 UTC m=+0.020541852 container died d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1abbb327496c9ea46e3549d59f5dc8616963b3294c698be2a52b067feae0827e-merged.mount: Deactivated successfully.
Jan 31 08:12:23 compute-0 podman[209381]: 2026-01-31 08:12:23.043476035 +0000 UTC m=+0.048396478 container remove d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:12:23 compute-0 systemd[1]: libpod-conmon-d177d5d89be3db1db471e7edafde2044e6f5438c3eadb98a17305f59bb423ad1.scope: Deactivated successfully.
Jan 31 08:12:23 compute-0 sudo[209112]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:23 compute-0 sudo[209420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:23 compute-0 sudo[209420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:23 compute-0 sudo[209420]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:23 compute-0 sudo[209473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:12:23 compute-0 sudo[209473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:23 compute-0 podman[209608]: 2026-01-31 08:12:23.418912808 +0000 UTC m=+0.038035255 container create 2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:12:23 compute-0 systemd[1]: Started libpod-conmon-2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4.scope.
Jan 31 08:12:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:23 compute-0 podman[209608]: 2026-01-31 08:12:23.487508906 +0000 UTC m=+0.106631373 container init 2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_davinci, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:12:23 compute-0 podman[209608]: 2026-01-31 08:12:23.49362181 +0000 UTC m=+0.112744287 container start 2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_davinci, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:12:23 compute-0 podman[209608]: 2026-01-31 08:12:23.401035024 +0000 UTC m=+0.020157491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:23 compute-0 jovial_davinci[209625]: 167 167
Jan 31 08:12:23 compute-0 systemd[1]: libpod-2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4.scope: Deactivated successfully.
Jan 31 08:12:23 compute-0 podman[209608]: 2026-01-31 08:12:23.498432053 +0000 UTC m=+0.117554500 container attach 2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_davinci, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:12:23 compute-0 podman[209608]: 2026-01-31 08:12:23.498757581 +0000 UTC m=+0.117880028 container died 2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:12:23 compute-0 python3.9[209595]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b37ba1409cc7ea4a2fbb8035161435ff76211e9e83dfaa26fd1a3b851c0e4d9-merged.mount: Deactivated successfully.
Jan 31 08:12:23 compute-0 podman[209608]: 2026-01-31 08:12:23.529611662 +0000 UTC m=+0.148734109 container remove 2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:12:23 compute-0 systemd[1]: libpod-conmon-2924fe6d778d87cf26dbf022bd793759f2c86ae2f99ad15f6d2f21e90950fdc4.scope: Deactivated successfully.
Jan 31 08:12:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:23 compute-0 podman[209672]: 2026-01-31 08:12:23.665100126 +0000 UTC m=+0.037939402 container create bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:12:23 compute-0 systemd[1]: Started libpod-conmon-bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3.scope.
Jan 31 08:12:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae00beeb6a508fb243cc7ed65144057cd1102c3147300546dde627dc88203141/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae00beeb6a508fb243cc7ed65144057cd1102c3147300546dde627dc88203141/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae00beeb6a508fb243cc7ed65144057cd1102c3147300546dde627dc88203141/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae00beeb6a508fb243cc7ed65144057cd1102c3147300546dde627dc88203141/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:23 compute-0 podman[209672]: 2026-01-31 08:12:23.649303116 +0000 UTC m=+0.022142402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:23 compute-0 podman[209672]: 2026-01-31 08:12:23.74615668 +0000 UTC m=+0.118995966 container init bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:23 compute-0 podman[209672]: 2026-01-31 08:12:23.751168846 +0000 UTC m=+0.124008152 container start bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:12:23 compute-0 podman[209672]: 2026-01-31 08:12:23.754912841 +0000 UTC m=+0.127752107 container attach bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:12:23 compute-0 sudo[209819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqqozeahqcfynijrrkexdmmbhqcghzrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847143.6590939-1167-36907496168608/AnsiballZ_command.py'
Jan 31 08:12:23 compute-0 sudo[209819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:24 compute-0 sudo[209819]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:24 compute-0 lvm[209942]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:12:24 compute-0 lvm[209942]: VG ceph_vg0 finished
Jan 31 08:12:24 compute-0 lvm[209949]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:12:24 compute-0 lvm[209949]: VG ceph_vg1 finished
Jan 31 08:12:24 compute-0 lvm[209948]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:12:24 compute-0 lvm[209948]: VG ceph_vg2 finished
Jan 31 08:12:24 compute-0 admiring_jones[209734]: {}
Jan 31 08:12:24 compute-0 systemd[1]: libpod-bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3.scope: Deactivated successfully.
Jan 31 08:12:24 compute-0 systemd[1]: libpod-bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3.scope: Consumed 1.009s CPU time.
Jan 31 08:12:24 compute-0 podman[209672]: 2026-01-31 08:12:24.435727032 +0000 UTC m=+0.808566328 container died bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae00beeb6a508fb243cc7ed65144057cd1102c3147300546dde627dc88203141-merged.mount: Deactivated successfully.
Jan 31 08:12:24 compute-0 sudo[210062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxzsnukvkckkbrvhlahxgtkobbtwoazp ; FSID=dc03f344-536f-5591-add9-31059f42637c KEY=AQBmtX1pAAAAABAAGlx/43NfN+tI0V7rwdqN7g== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847144.3134716-1175-181052586984748/AnsiballZ_command.py'
Jan 31 08:12:24 compute-0 sudo[210062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:24 compute-0 podman[209672]: 2026-01-31 08:12:24.719909423 +0000 UTC m=+1.092748729 container remove bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:12:24 compute-0 systemd[1]: libpod-conmon-bd8a3c58e4993ff8ba005a46ddde5dc81b9b1370232a93dd6c38cd4489fa69a3.scope: Deactivated successfully.
Jan 31 08:12:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:24 compute-0 sudo[209473]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:12:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:12:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:12:24 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:12:24 compute-0 polkitd[43559]: Registered Authentication Agent for unix-process:210073:332724 (system bus name :1.2585 [pkttyagent --process 210073 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 08:12:24 compute-0 sudo[210067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:12:24 compute-0 sudo[210067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:24 compute-0 polkitd[43559]: Unregistered Authentication Agent for unix-process:210073:332724 (system bus name :1.2585, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 08:12:24 compute-0 sudo[210067]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:24 compute-0 ceph-mon[75294]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:24 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:12:24 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:12:24 compute-0 sudo[210062]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:25 compute-0 sudo[210247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xssmtgeisienpqjrhvgykixdkverkiaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847145.1189158-1183-138666276053538/AnsiballZ_copy.py'
Jan 31 08:12:25 compute-0 sudo[210247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:25 compute-0 python3.9[210249]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:25 compute-0 sudo[210247]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:26 compute-0 sudo[210399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjcmydmamyeyjclagclhluedgohkged ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847145.7675798-1191-178274779394698/AnsiballZ_stat.py'
Jan 31 08:12:26 compute-0 sudo[210399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:26 compute-0 python3.9[210401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:26 compute-0 sudo[210399]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:26 compute-0 sudo[210522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wazdyzoaiyodiccfsbewlnmpwkacefnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847145.7675798-1191-178274779394698/AnsiballZ_copy.py'
Jan 31 08:12:26 compute-0 sudo[210522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:26 compute-0 python3.9[210524]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847145.7675798-1191-178274779394698/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:26 compute-0 sudo[210522]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:26 compute-0 ceph-mon[75294]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:27 compute-0 sudo[210674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gydqeyktqbthjodzdiahchvmtqgadyqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847147.0186048-1207-96407213241454/AnsiballZ_file.py'
Jan 31 08:12:27 compute-0 sudo[210674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:27 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 08:12:27 compute-0 python3.9[210676]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:27 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 08:12:27 compute-0 sudo[210674]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:27 compute-0 sudo[210826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cepydchfdbbqqdnsaqebkenkbqzjfjnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847147.7435453-1215-179047992924727/AnsiballZ_stat.py'
Jan 31 08:12:27 compute-0 sudo[210826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:28 compute-0 python3.9[210828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:28 compute-0 sudo[210826]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:28 compute-0 sudo[210904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bocbqcmfsxpmjvurysclpsgcgeepwrsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847147.7435453-1215-179047992924727/AnsiballZ_file.py'
Jan 31 08:12:28 compute-0 sudo[210904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:28 compute-0 python3.9[210906]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:28 compute-0 sudo[210904]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:28 compute-0 sudo[211056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmjffscydjuirbvxxocgwzwlwvyrqcac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847148.6982236-1227-173220771514996/AnsiballZ_stat.py'
Jan 31 08:12:28 compute-0 sudo[211056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:28 compute-0 ceph-mon[75294]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:29 compute-0 python3.9[211058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:29 compute-0 sudo[211056]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-0 sudo[211134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvofdmhyqwxraxbpycnbewricihjcpnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847148.6982236-1227-173220771514996/AnsiballZ_file.py'
Jan 31 08:12:29 compute-0 sudo[211134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:29 compute-0 python3.9[211136]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7l3tqorn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:29 compute-0 sudo[211134]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-0 sudo[211286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbmubgriiugbqvwgkeigqlguvqhcfbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847149.6923852-1239-131764218824164/AnsiballZ_stat.py'
Jan 31 08:12:29 compute-0 sudo[211286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:30 compute-0 python3.9[211288]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:30 compute-0 sudo[211286]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:30 compute-0 sudo[211364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxagfgrfkidldcrvryzdntysrtaozugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847149.6923852-1239-131764218824164/AnsiballZ_file.py'
Jan 31 08:12:30 compute-0 sudo[211364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:30 compute-0 python3.9[211366]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:30 compute-0 sudo[211364]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:30 compute-0 sudo[211516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxrwjvfsijjskgnzfynvkzyqfhukqfle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847150.7010546-1252-101616428492382/AnsiballZ_command.py'
Jan 31 08:12:30 compute-0 sudo[211516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:31 compute-0 ceph-mon[75294]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:31 compute-0 python3.9[211518]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:31 compute-0 sudo[211516]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:31 compute-0 sudo[211669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgdaezckyjtfndeacqndbxuochwfsshb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847151.35664-1260-272330859533732/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 08:12:31 compute-0 sudo[211669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:31 compute-0 python3[211671]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 08:12:31 compute-0 sudo[211669]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:32 compute-0 sudo[211821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrpvhjgnzanyqetdzzqhojgiocgnqap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847152.168968-1268-90565868660428/AnsiballZ_stat.py'
Jan 31 08:12:32 compute-0 sudo[211821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:32 compute-0 python3.9[211823]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:32 compute-0 sudo[211821]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:32 compute-0 sudo[211899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrsburrritllarrarimcvpqtbwmcgflm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847152.168968-1268-90565868660428/AnsiballZ_file.py'
Jan 31 08:12:32 compute-0 sudo[211899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:33 compute-0 python3.9[211901]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:33 compute-0 ceph-mon[75294]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:33 compute-0 sudo[211899]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:33 compute-0 sudo[212051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximeuzbhgnjeofnrgfcbbmsqltbbagtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847153.2866457-1280-186348066565385/AnsiballZ_stat.py'
Jan 31 08:12:33 compute-0 sudo[212051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:33 compute-0 python3.9[212053]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:33 compute-0 sudo[212051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:34 compute-0 sudo[212176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmqljfzaturbgqcflxvslhpztblqdjcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847153.2866457-1280-186348066565385/AnsiballZ_copy.py'
Jan 31 08:12:34 compute-0 sudo[212176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:34 compute-0 python3.9[212178]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847153.2866457-1280-186348066565385/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:34 compute-0 sudo[212176]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:34 compute-0 sudo[212328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjplpyflxsobcnfflzxjfrzvmrfnjoic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847154.3815825-1295-238218398155802/AnsiballZ_stat.py'
Jan 31 08:12:34 compute-0 sudo[212328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:34 compute-0 python3.9[212330]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:34 compute-0 sudo[212328]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:34 compute-0 sudo[212406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vezuaohxgfrlwbzdyxdqzfkczkvfgngu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847154.3815825-1295-238218398155802/AnsiballZ_file.py'
Jan 31 08:12:34 compute-0 sudo[212406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:35 compute-0 python3.9[212408]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:35 compute-0 ceph-mon[75294]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:35 compute-0 sudo[212406]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-0 sudo[212558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdilhdabowqqmllfkixzixwleezpekyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847155.2966604-1307-194082278855033/AnsiballZ_stat.py'
Jan 31 08:12:35 compute-0 sudo[212558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:35 compute-0 python3.9[212560]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:35 compute-0 sudo[212558]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-0 sudo[212636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwzyknjyisyblsfmxgzoxgneudinybfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847155.2966604-1307-194082278855033/AnsiballZ_file.py'
Jan 31 08:12:35 compute-0 sudo[212636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:36 compute-0 python3.9[212638]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:36 compute-0 sudo[212636]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:36 compute-0 sudo[212788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryklaodicoeblbzyeqwacacfnpmofceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847156.3211684-1319-93905360117381/AnsiballZ_stat.py'
Jan 31 08:12:36 compute-0 sudo[212788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:36 compute-0 python3.9[212790]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:36 compute-0 sudo[212788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:37 compute-0 sudo[212913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlkgggkslxemchtpifddhseqwdtqwvqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847156.3211684-1319-93905360117381/AnsiballZ_copy.py'
Jan 31 08:12:37 compute-0 sudo[212913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:37 compute-0 ceph-mon[75294]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:37 compute-0 python3.9[212915]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847156.3211684-1319-93905360117381/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:37 compute-0 sudo[212913]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:37 compute-0 sudo[213065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onfhguosgibcamasluecertnttqviucf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847157.408356-1334-4698546823379/AnsiballZ_file.py'
Jan 31 08:12:37 compute-0 sudo[213065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:37 compute-0 python3.9[213067]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:37 compute-0 sudo[213065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:38 compute-0 sudo[213217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abynctjtjpqqlnmxekxeklzfikwkxxjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847157.9269416-1342-233544393176173/AnsiballZ_command.py'
Jan 31 08:12:38 compute-0 sudo[213217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:38 compute-0 python3.9[213219]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:38 compute-0 sudo[213217]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:38 compute-0 sudo[213372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggzijzskvfcazukuxhzbdcyquldyxwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847158.5012703-1350-217967581186935/AnsiballZ_blockinfile.py'
Jan 31 08:12:38 compute-0 sudo[213372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:39 compute-0 python3.9[213374]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:39 compute-0 sudo[213372]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:39 compute-0 ceph-mon[75294]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:39 compute-0 sudo[213524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmqtxzytbtsztreoppjmtrhxkzupgqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847159.2623992-1359-89646273453754/AnsiballZ_command.py'
Jan 31 08:12:39 compute-0 sudo[213524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:39 compute-0 python3.9[213526]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:39 compute-0 sudo[213524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:40 compute-0 sudo[213677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdoxmptgeprtwvfgskeockvhyloglgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847159.8924682-1367-148730532098629/AnsiballZ_stat.py'
Jan 31 08:12:40 compute-0 sudo[213677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:40 compute-0 python3.9[213679]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:12:40 compute-0 sudo[213677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:40 compute-0 sudo[213831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viickekzdulvrxrulftxhrjdjifavrxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847160.482231-1375-244069172457321/AnsiballZ_command.py'
Jan 31 08:12:40 compute-0 sudo[213831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:40 compute-0 python3.9[213833]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:40 compute-0 sudo[213831]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:41 compute-0 sudo[213986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbkxuvvfvgkqosborqgdxfdtsqhlely ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847161.1182127-1383-142136612669760/AnsiballZ_file.py'
Jan 31 08:12:41 compute-0 sudo[213986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:41 compute-0 ceph-mon[75294]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:41 compute-0 python3.9[213988]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:41 compute-0 sudo[213986]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:42 compute-0 sudo[214138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdmsivyujxcazwxadgrnlcjcllhfwske ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847161.8023138-1391-269855678473048/AnsiballZ_stat.py'
Jan 31 08:12:42 compute-0 sudo[214138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:42 compute-0 python3.9[214140]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:42 compute-0 sudo[214138]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:42 compute-0 ceph-mon[75294]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:42 compute-0 sudo[214261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlswobxtibggiomviqjpxprnobwmzhgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847161.8023138-1391-269855678473048/AnsiballZ_copy.py'
Jan 31 08:12:42 compute-0 sudo[214261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:42 compute-0 python3.9[214263]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847161.8023138-1391-269855678473048/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:42 compute-0 sudo[214261]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:43 compute-0 sudo[214413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjvgwewguwtbptaqywsxacocgktgcqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847162.9734628-1406-139281320455270/AnsiballZ_stat.py'
Jan 31 08:12:43 compute-0 sudo[214413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:43 compute-0 python3.9[214415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:43 compute-0 sudo[214413]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:43 compute-0 sudo[214551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fheltcdeecclzuefaguyxiyeidvbhrda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847162.9734628-1406-139281320455270/AnsiballZ_copy.py'
Jan 31 08:12:43 compute-0 sudo[214551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:43 compute-0 podman[214510]: 2026-01-31 08:12:43.85724057 +0000 UTC m=+0.089458068 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:12:43 compute-0 python3.9[214557]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847162.9734628-1406-139281320455270/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:44 compute-0 sudo[214551]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-0 sudo[214714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtlqtnjiqqqniqqllownhkfedsobkjyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847164.1344666-1421-162078250100997/AnsiballZ_stat.py'
Jan 31 08:12:44 compute-0 sudo[214714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:44 compute-0 python3.9[214716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:44 compute-0 sudo[214714]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-0 ceph-mon[75294]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:44 compute-0 sudo[214837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgfwyqxihfpadhpmryknqbmjfmkrcjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847164.1344666-1421-162078250100997/AnsiballZ_copy.py'
Jan 31 08:12:44 compute-0 sudo[214837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:45 compute-0 python3.9[214839]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847164.1344666-1421-162078250100997/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:45 compute-0 sudo[214837]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:45 compute-0 sudo[214989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcvyerxeaprwoxlsyjshtjdsoiutvlzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847165.2306898-1436-157573560473649/AnsiballZ_systemd.py'
Jan 31 08:12:45 compute-0 sudo[214989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:45 compute-0 python3.9[214991]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:12:45 compute-0 systemd[1]: Reloading.
Jan 31 08:12:45 compute-0 systemd-sysv-generator[215017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:45 compute-0 systemd-rc-local-generator[215013]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:46 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 08:12:46 compute-0 sudo[214989]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:46 compute-0 sudo[215180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qprovrznhxujskqfeinkajatpomglilv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847166.1691585-1444-136227716272437/AnsiballZ_systemd.py'
Jan 31 08:12:46 compute-0 sudo[215180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:46 compute-0 python3.9[215182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 08:12:46 compute-0 systemd[1]: Reloading.
Jan 31 08:12:46 compute-0 ceph-mon[75294]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:46 compute-0 systemd-rc-local-generator[215204]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:46 compute-0 systemd-sysv-generator[215208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:12:46.953 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:12:46.955 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:12:46.955 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:47 compute-0 systemd[1]: Reloading.
Jan 31 08:12:47 compute-0 systemd-rc-local-generator[215245]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:12:47 compute-0 systemd-sysv-generator[215249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:12:47 compute-0 sudo[215180]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:47 compute-0 sshd-session[156358]: Connection closed by 192.168.122.30 port 52322
Jan 31 08:12:47 compute-0 sshd-session[156355]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:12:47 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 08:12:47 compute-0 systemd[1]: session-49.scope: Consumed 2min 53.794s CPU time.
Jan 31 08:12:47 compute-0 systemd-logind[810]: Session 49 logged out. Waiting for processes to exit.
Jan 31 08:12:47 compute-0 systemd-logind[810]: Removed session 49.
Jan 31 08:12:48 compute-0 ceph-mon[75294]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:49 compute-0 podman[215278]: 2026-01-31 08:12:49.220493054 +0000 UTC m=+0.083969808 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 31 08:12:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:12:50
Jan 31 08:12:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:12:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:12:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Jan 31 08:12:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:12:51 compute-0 ceph-mon[75294]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:53 compute-0 sshd-session[215297]: Accepted publickey for zuul from 192.168.122.30 port 35682 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:12:53 compute-0 systemd-logind[810]: New session 50 of user zuul.
Jan 31 08:12:53 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 31 08:12:53 compute-0 sshd-session[215297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:12:53 compute-0 ceph-mon[75294]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:54 compute-0 python3.9[215450]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:12:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:55 compute-0 python3.9[215604]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:12:55 compute-0 network[215621]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:12:55 compute-0 network[215622]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:12:55 compute-0 network[215623]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:12:55 compute-0 ceph-mon[75294]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:12:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:12:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:56 compute-0 ceph-mon[75294]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:58 compute-0 sudo[215893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mijuljuurewywckukgkkzwjnwprltkxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847177.7910829-42-129888502463085/AnsiballZ_setup.py'
Jan 31 08:12:58 compute-0 sudo[215893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:58 compute-0 python3.9[215895]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:12:58 compute-0 sudo[215893]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:58 compute-0 ceph-mon[75294]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:58 compute-0 sudo[215977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thqqlonxklqfkamptenujbldtigxvtdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847177.7910829-42-129888502463085/AnsiballZ_dnf.py'
Jan 31 08:12:58 compute-0 sudo[215977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:59 compute-0 python3.9[215979]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:12:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:00 compute-0 ceph-mon[75294]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:02 compute-0 ceph-mon[75294]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:04 compute-0 sudo[215977]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:04 compute-0 ceph-mon[75294]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:05 compute-0 sudo[216130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfmmqbxuwzguzbpfktgzevbodfoyruqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847184.7376072-54-163178632438050/AnsiballZ_stat.py'
Jan 31 08:13:05 compute-0 sudo[216130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:05 compute-0 python3.9[216132]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:13:05 compute-0 sudo[216130]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:05 compute-0 sudo[216282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rylagsgbxudpgthxzwllihomjsadunkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847185.4677138-64-64586149541281/AnsiballZ_command.py'
Jan 31 08:13:05 compute-0 sudo[216282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:06 compute-0 python3.9[216284]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:13:06 compute-0 sudo[216282]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:13:06 compute-0 sudo[216435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cicuxlilvmeqselzjaffkodgthrzbqgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847186.3185396-74-141140971803227/AnsiballZ_stat.py'
Jan 31 08:13:06 compute-0 sudo[216435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:06 compute-0 python3.9[216437]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:13:06 compute-0 sudo[216435]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:06 compute-0 ceph-mon[75294]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:07 compute-0 sudo[216587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhugvpxmrmtqawdrjnyrrvtquvtmwjfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847187.0122926-82-137836828024962/AnsiballZ_command.py'
Jan 31 08:13:07 compute-0 sudo[216587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:07 compute-0 python3.9[216589]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:13:07 compute-0 sudo[216587]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:07 compute-0 sudo[216740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jswjkuwkbetrwpyxdgevuegqnjeberhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847187.5358348-90-197166412664973/AnsiballZ_stat.py'
Jan 31 08:13:07 compute-0 sudo[216740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:07 compute-0 python3.9[216742]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:07 compute-0 sudo[216740]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:08 compute-0 sudo[216863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilecekpqvpsnwuluikrjwfdmmigpawhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847187.5358348-90-197166412664973/AnsiballZ_copy.py'
Jan 31 08:13:08 compute-0 sudo[216863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:08 compute-0 python3.9[216865]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847187.5358348-90-197166412664973/.source.iscsi _original_basename=.oubpo91d follow=False checksum=3b2a40f36bb808f7be3300aec9b7dd302354aed0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:08 compute-0 sudo[216863]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:09 compute-0 ceph-mon[75294]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:09 compute-0 sudo[217015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvofhztcuxiiefaefgvkjvsicjhuaaco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847188.783993-105-217660964514438/AnsiballZ_file.py'
Jan 31 08:13:09 compute-0 sudo[217015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:09 compute-0 python3.9[217017]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:09 compute-0 sudo[217015]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:09 compute-0 sudo[217167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzunghyxndsdfnxeieojuqdkuqebuftv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847189.4775186-113-230286760206712/AnsiballZ_lineinfile.py'
Jan 31 08:13:09 compute-0 sudo[217167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:10 compute-0 python3.9[217169]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:10 compute-0 sudo[217167]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:10 compute-0 sudo[217319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-splpsutgmfjkqtihzervtsmwesxksapc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847190.2160645-122-19792876782874/AnsiballZ_systemd_service.py'
Jan 31 08:13:10 compute-0 sudo[217319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:11 compute-0 python3.9[217321]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:11 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 08:13:11 compute-0 sudo[217319]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:11 compute-0 sudo[217475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytikjrcegixabyqpgnizkemqbkptswbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847191.1994004-130-201185846049562/AnsiballZ_systemd_service.py'
Jan 31 08:13:11 compute-0 sudo[217475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:11 compute-0 ceph-mon[75294]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:11 compute-0 python3.9[217477]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:11 compute-0 systemd[1]: Reloading.
Jan 31 08:13:11 compute-0 systemd-rc-local-generator[217505]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:11 compute-0 systemd-sysv-generator[217510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:11 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 08:13:11 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 08:13:12 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 08:13:12 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 08:13:12 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 08:13:12 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 08:13:12 compute-0 sudo[217475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:12 compute-0 python3.9[217677]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:13:12 compute-0 network[217694]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:13:12 compute-0 network[217695]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:13:12 compute-0 network[217696]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:13:13 compute-0 ceph-mon[75294]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:13 compute-0 podman[217738]: 2026-01-31 08:13:13.998273354 +0000 UTC m=+0.079723451 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:13:14 compute-0 ceph-mon[75294]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:15 compute-0 sudo[217994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjtqseqxanpdqmhutpkzzrrdvgydwuoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847195.1509414-153-56269467190789/AnsiballZ_dnf.py'
Jan 31 08:13:15 compute-0 sudo[217994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:15 compute-0 python3.9[217996]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:13:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:16 compute-0 ceph-mon[75294]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:17 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:13:17 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:13:17 compute-0 systemd[1]: Reloading.
Jan 31 08:13:17 compute-0 systemd-rc-local-generator[218038]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:17 compute-0 systemd-sysv-generator[218042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:13:18 compute-0 ceph-mon[75294]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:19 compute-0 sudo[217994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:19 compute-0 sudo[218321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yroejbjwmtdvkkggitjgcebprieahxlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847199.378831-162-244625512424778/AnsiballZ_file.py'
Jan 31 08:13:19 compute-0 sudo[218321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:19 compute-0 podman[218284]: 2026-01-31 08:13:19.727461282 +0000 UTC m=+0.103048302 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 08:13:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:13:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:13:19 compute-0 systemd[1]: run-r4f5484286f70489da43b094921182db1.service: Deactivated successfully.
Jan 31 08:13:19 compute-0 python3.9[218325]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 08:13:19 compute-0 sudo[218321]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:20 compute-0 sudo[218482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlryfvshfkplsnugqsrexpisfxeshrzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847200.1071875-170-47207027307174/AnsiballZ_modprobe.py'
Jan 31 08:13:20 compute-0 sudo[218482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:20 compute-0 python3.9[218484]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 08:13:20 compute-0 sudo[218482]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:20 compute-0 ceph-mon[75294]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:21 compute-0 sudo[218638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyobptnqxvdbzhrhyzmpfqhharrosbnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847200.8829608-178-215041994176685/AnsiballZ_stat.py'
Jan 31 08:13:21 compute-0 sudo[218638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:21 compute-0 python3.9[218640]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:21 compute-0 sudo[218638]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:21 compute-0 sudo[218761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whtzkraeqcqmwrgjqftlalhngsakmtsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847200.8829608-178-215041994176685/AnsiballZ_copy.py'
Jan 31 08:13:21 compute-0 sudo[218761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:21 compute-0 python3.9[218763]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847200.8829608-178-215041994176685/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:21 compute-0 sudo[218761]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:22 compute-0 sudo[218913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejgwlegimvzhowbnwfsrkyljhmddtzfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847201.98716-194-160386020572717/AnsiballZ_lineinfile.py'
Jan 31 08:13:22 compute-0 sudo[218913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:22 compute-0 python3.9[218915]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:22 compute-0 sudo[218913]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:22 compute-0 ceph-mon[75294]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:23 compute-0 sudo[219065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekuaqrhzjlvsbvgcfasrdokkvsmbmjrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847202.6147506-202-88001327281235/AnsiballZ_systemd.py'
Jan 31 08:13:23 compute-0 sudo[219065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:23 compute-0 python3.9[219067]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:13:23 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 08:13:23 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 08:13:23 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 08:13:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 08:13:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 08:13:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:23 compute-0 sudo[219065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:23 compute-0 sudo[219221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehmuqluchyberpgjpbykrxcjusxcdvqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847203.7224367-210-237626791150782/AnsiballZ_command.py'
Jan 31 08:13:23 compute-0 sudo[219221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:24 compute-0 python3.9[219223]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:13:24 compute-0 sudo[219221]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-0 sudo[219374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kryqxtchvtxxbeodttkwbnxrcnscdlti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847204.4074788-220-168401650653110/AnsiballZ_stat.py'
Jan 31 08:13:24 compute-0 sudo[219374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:24 compute-0 python3.9[219376]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:13:24 compute-0 sudo[219374]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-0 sudo[219401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:24 compute-0 sudo[219401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:24 compute-0 sudo[219401]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-0 sudo[219426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:13:24 compute-0 sudo[219426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:25 compute-0 sudo[219588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcrphaskztkishphmgizpkbehruutjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847204.9533298-229-103149023322436/AnsiballZ_stat.py'
Jan 31 08:13:25 compute-0 sudo[219588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:25 compute-0 ceph-mon[75294]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:25 compute-0 python3.9[219592]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:25 compute-0 sudo[219588]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:25 compute-0 sudo[219426]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:13:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:13:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:13:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:13:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:13:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:13:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:13:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:13:25 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:13:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:13:25 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:25 compute-0 sudo[219704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:25 compute-0 sudo[219704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:25 compute-0 sudo[219704]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:25 compute-0 sudo[219754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvrokkwczyhtclqcnsqzzdqprkmukwwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847204.9533298-229-103149023322436/AnsiballZ_copy.py'
Jan 31 08:13:25 compute-0 sudo[219754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:25 compute-0 sudo[219756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:13:25 compute-0 sudo[219756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:25 compute-0 python3.9[219761]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847204.9533298-229-103149023322436/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:25 compute-0 sudo[219754]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:25 compute-0 podman[219818]: 2026-01-31 08:13:25.960093486 +0000 UTC m=+0.055350594 container create 01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:26 compute-0 systemd[1]: Started libpod-conmon-01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09.scope.
Jan 31 08:13:26 compute-0 podman[219818]: 2026-01-31 08:13:25.925444978 +0000 UTC m=+0.020702126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:26 compute-0 podman[219818]: 2026-01-31 08:13:26.062015609 +0000 UTC m=+0.157272717 container init 01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bouman, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 31 08:13:26 compute-0 podman[219818]: 2026-01-31 08:13:26.068461242 +0000 UTC m=+0.163718350 container start 01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bouman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:26 compute-0 heuristic_bouman[219878]: 167 167
Jan 31 08:13:26 compute-0 systemd[1]: libpod-01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09.scope: Deactivated successfully.
Jan 31 08:13:26 compute-0 podman[219818]: 2026-01-31 08:13:26.080415865 +0000 UTC m=+0.175673023 container attach 01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bouman, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:26 compute-0 podman[219818]: 2026-01-31 08:13:26.080917278 +0000 UTC m=+0.176174406 container died 01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bouman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2e721abb663a7073e62d23189f8c99cb860d64c4d8f07c98aa45df27df55faa-merged.mount: Deactivated successfully.
Jan 31 08:13:26 compute-0 podman[219818]: 2026-01-31 08:13:26.164630949 +0000 UTC m=+0.259888047 container remove 01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bouman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:13:26 compute-0 systemd[1]: libpod-conmon-01a1cbb1ec99d8c27df14ba6e37e4b3560e367ada46af0f89418875488171b09.scope: Deactivated successfully.
Jan 31 08:13:26 compute-0 sudo[219979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiwgjcfkezcgsogedpflpyjjpzwvpdbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847205.9671972-244-165731715740748/AnsiballZ_command.py'
Jan 31 08:13:26 compute-0 sudo[219979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:13:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:13:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:13:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:13:26 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:26 compute-0 podman[219987]: 2026-01-31 08:13:26.273494837 +0000 UTC m=+0.039454131 container create 7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:13:26 compute-0 systemd[1]: Started libpod-conmon-7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618.scope.
Jan 31 08:13:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6362882cf97c8066b5fa8cce89e34972bb37b08fdf435f11145a03db65b3834b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6362882cf97c8066b5fa8cce89e34972bb37b08fdf435f11145a03db65b3834b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6362882cf97c8066b5fa8cce89e34972bb37b08fdf435f11145a03db65b3834b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6362882cf97c8066b5fa8cce89e34972bb37b08fdf435f11145a03db65b3834b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6362882cf97c8066b5fa8cce89e34972bb37b08fdf435f11145a03db65b3834b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:26 compute-0 podman[219987]: 2026-01-31 08:13:26.257864621 +0000 UTC m=+0.023823935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:26 compute-0 podman[219987]: 2026-01-31 08:13:26.373824389 +0000 UTC m=+0.139783743 container init 7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:26 compute-0 podman[219987]: 2026-01-31 08:13:26.378110778 +0000 UTC m=+0.144070072 container start 7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:26 compute-0 podman[219987]: 2026-01-31 08:13:26.382550261 +0000 UTC m=+0.148509625 container attach 7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:13:26 compute-0 python3.9[219981]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:13:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:26 compute-0 epic_meitner[220004]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:13:26 compute-0 epic_meitner[220004]: --> All data devices are unavailable
Jan 31 08:13:26 compute-0 systemd[1]: libpod-7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618.scope: Deactivated successfully.
Jan 31 08:13:26 compute-0 podman[219987]: 2026-01-31 08:13:26.799172937 +0000 UTC m=+0.565132231 container died 7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6362882cf97c8066b5fa8cce89e34972bb37b08fdf435f11145a03db65b3834b-merged.mount: Deactivated successfully.
Jan 31 08:13:26 compute-0 podman[219987]: 2026-01-31 08:13:26.855990087 +0000 UTC m=+0.621949381 container remove 7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 08:13:26 compute-0 systemd[1]: libpod-conmon-7ebac1437a197ea4e1c1f101d330c776c12b46d10073bdbdde7d82d323e89618.scope: Deactivated successfully.
Jan 31 08:13:26 compute-0 sudo[219756]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:26 compute-0 sudo[220040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:26 compute-0 sudo[220040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:26 compute-0 sudo[220040]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:26 compute-0 sudo[220065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:13:26 compute-0 sudo[220065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:27 compute-0 ceph-mon[75294]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:27 compute-0 podman[220101]: 2026-01-31 08:13:27.278789199 +0000 UTC m=+0.050194142 container create 9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brown, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:27 compute-0 systemd[1]: Started libpod-conmon-9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057.scope.
Jan 31 08:13:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:27 compute-0 podman[220101]: 2026-01-31 08:13:27.258262869 +0000 UTC m=+0.029667822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:27 compute-0 podman[220101]: 2026-01-31 08:13:27.358917119 +0000 UTC m=+0.130322072 container init 9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:13:27 compute-0 podman[220101]: 2026-01-31 08:13:27.367703442 +0000 UTC m=+0.139108375 container start 9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:13:27 compute-0 inspiring_brown[220117]: 167 167
Jan 31 08:13:27 compute-0 systemd[1]: libpod-9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057.scope: Deactivated successfully.
Jan 31 08:13:27 compute-0 podman[220101]: 2026-01-31 08:13:27.372932065 +0000 UTC m=+0.144336998 container attach 9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 08:13:27 compute-0 podman[220101]: 2026-01-31 08:13:27.37434324 +0000 UTC m=+0.145748183 container died 9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c5fbec0fc77619417369936e118aa3197af412975449f541455b0f593fac0f2-merged.mount: Deactivated successfully.
Jan 31 08:13:27 compute-0 sudo[219979]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:27 compute-0 podman[220101]: 2026-01-31 08:13:27.422205753 +0000 UTC m=+0.193610696 container remove 9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brown, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:27 compute-0 systemd[1]: libpod-conmon-9e00fc793cf2bd2904058a510dcd8186db79b8d18241c8f62ef2ac0dd1a86057.scope: Deactivated successfully.
Jan 31 08:13:27 compute-0 podman[220165]: 2026-01-31 08:13:27.55985409 +0000 UTC m=+0.053464295 container create c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:13:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:27 compute-0 systemd[1]: Started libpod-conmon-c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764.scope.
Jan 31 08:13:27 compute-0 podman[220165]: 2026-01-31 08:13:27.530934628 +0000 UTC m=+0.024544853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351d98f75938acbd653209c56795a6dd8a04d22d140659004efdcea4b50bf8ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351d98f75938acbd653209c56795a6dd8a04d22d140659004efdcea4b50bf8ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351d98f75938acbd653209c56795a6dd8a04d22d140659004efdcea4b50bf8ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351d98f75938acbd653209c56795a6dd8a04d22d140659004efdcea4b50bf8ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:27 compute-0 podman[220165]: 2026-01-31 08:13:27.657613907 +0000 UTC m=+0.151224112 container init c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:27 compute-0 podman[220165]: 2026-01-31 08:13:27.665293662 +0000 UTC m=+0.158903858 container start c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:13:27 compute-0 podman[220165]: 2026-01-31 08:13:27.673740767 +0000 UTC m=+0.167350992 container attach c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:13:27 compute-0 sudo[220312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmpjbzomuscrwaxxiabotuhhgnhougnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847207.542571-252-226517378906886/AnsiballZ_lineinfile.py'
Jan 31 08:13:27 compute-0 sudo[220312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:27 compute-0 modest_torvalds[220234]: {
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:     "0": [
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:         {
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "devices": [
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "/dev/loop3"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             ],
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_name": "ceph_lv0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_size": "21470642176",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "name": "ceph_lv0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "tags": {
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cluster_name": "ceph",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.crush_device_class": "",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.encrypted": "0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.objectstore": "bluestore",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osd_id": "0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.type": "block",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.vdo": "0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.with_tpm": "0"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             },
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "type": "block",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "vg_name": "ceph_vg0"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:         }
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:     ],
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:     "1": [
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:         {
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "devices": [
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "/dev/loop4"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             ],
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_name": "ceph_lv1",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_size": "21470642176",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "name": "ceph_lv1",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "tags": {
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cluster_name": "ceph",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.crush_device_class": "",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.encrypted": "0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.objectstore": "bluestore",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osd_id": "1",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.type": "block",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.vdo": "0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.with_tpm": "0"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             },
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "type": "block",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "vg_name": "ceph_vg1"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:         }
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:     ],
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:     "2": [
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:         {
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "devices": [
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "/dev/loop5"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             ],
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_name": "ceph_lv2",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_size": "21470642176",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "name": "ceph_lv2",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "tags": {
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.cluster_name": "ceph",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.crush_device_class": "",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.encrypted": "0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.objectstore": "bluestore",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osd_id": "2",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.type": "block",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.vdo": "0",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:                 "ceph.with_tpm": "0"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             },
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "type": "block",
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:             "vg_name": "ceph_vg2"
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:         }
Jan 31 08:13:27 compute-0 modest_torvalds[220234]:     ]
Jan 31 08:13:27 compute-0 modest_torvalds[220234]: }
Jan 31 08:13:27 compute-0 systemd[1]: libpod-c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764.scope: Deactivated successfully.
Jan 31 08:13:27 compute-0 podman[220165]: 2026-01-31 08:13:27.956485491 +0000 UTC m=+0.450095706 container died c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-351d98f75938acbd653209c56795a6dd8a04d22d140659004efdcea4b50bf8ce-merged.mount: Deactivated successfully.
Jan 31 08:13:28 compute-0 python3.9[220314]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:28 compute-0 podman[220165]: 2026-01-31 08:13:28.012835489 +0000 UTC m=+0.506445694 container remove c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_torvalds, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:13:28 compute-0 systemd[1]: libpod-conmon-c465078f13239f40f5397ad10d3c3f3472094eebbffd529b55acb5bb5c3b3764.scope: Deactivated successfully.
Jan 31 08:13:28 compute-0 sudo[220312]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:28 compute-0 sudo[220065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:28 compute-0 sudo[220355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:28 compute-0 sudo[220355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:28 compute-0 sudo[220355]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:28 compute-0 sudo[220380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:13:28 compute-0 sudo[220380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:28 compute-0 podman[220469]: 2026-01-31 08:13:28.417056581 +0000 UTC m=+0.048262204 container create 743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brown, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:28 compute-0 systemd[1]: Started libpod-conmon-743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68.scope.
Jan 31 08:13:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:28 compute-0 podman[220469]: 2026-01-31 08:13:28.392092568 +0000 UTC m=+0.023298241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:28 compute-0 podman[220469]: 2026-01-31 08:13:28.4943646 +0000 UTC m=+0.125570253 container init 743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brown, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:28 compute-0 podman[220469]: 2026-01-31 08:13:28.498692149 +0000 UTC m=+0.129897782 container start 743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brown, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:13:28 compute-0 stoic_brown[220509]: 167 167
Jan 31 08:13:28 compute-0 systemd[1]: libpod-743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68.scope: Deactivated successfully.
Jan 31 08:13:28 compute-0 podman[220469]: 2026-01-31 08:13:28.503627344 +0000 UTC m=+0.134832977 container attach 743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brown, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:13:28 compute-0 podman[220469]: 2026-01-31 08:13:28.50385325 +0000 UTC m=+0.135058883 container died 743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc25adf39452201b326186f704f8f3db1aef00c9a68e97ce21e3aec21de4aa20-merged.mount: Deactivated successfully.
Jan 31 08:13:28 compute-0 sudo[220576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbxwjxacofxeokolvpkydpnuffvvmsmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847208.1592364-260-16997699815628/AnsiballZ_replace.py'
Jan 31 08:13:28 compute-0 sudo[220576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:28 compute-0 podman[220469]: 2026-01-31 08:13:28.570077248 +0000 UTC m=+0.201282881 container remove 743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:13:28 compute-0 systemd[1]: libpod-conmon-743bf0be8e81c7c57f6bceb8fa4b9002169b31dab5e34731f27d86b07ab35e68.scope: Deactivated successfully.
Jan 31 08:13:28 compute-0 podman[220586]: 2026-01-31 08:13:28.69647728 +0000 UTC m=+0.038874076 container create 6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:28 compute-0 systemd[1]: Started libpod-conmon-6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1.scope.
Jan 31 08:13:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f10ff3778ce3653bcd09b9d2ece206b558112f2a9667d79869281bc44ef33e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f10ff3778ce3653bcd09b9d2ece206b558112f2a9667d79869281bc44ef33e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f10ff3778ce3653bcd09b9d2ece206b558112f2a9667d79869281bc44ef33e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f10ff3778ce3653bcd09b9d2ece206b558112f2a9667d79869281bc44ef33e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:28 compute-0 python3.9[220578]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:28 compute-0 podman[220586]: 2026-01-31 08:13:28.674800722 +0000 UTC m=+0.017197538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:28 compute-0 podman[220586]: 2026-01-31 08:13:28.772962419 +0000 UTC m=+0.115359235 container init 6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:28 compute-0 podman[220586]: 2026-01-31 08:13:28.779173296 +0000 UTC m=+0.121570092 container start 6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:13:28 compute-0 sudo[220576]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:28 compute-0 podman[220586]: 2026-01-31 08:13:28.785865776 +0000 UTC m=+0.128262592 container attach 6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:29 compute-0 sudo[220769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srvxhmwrjpmesdisddocdhqtvztgnpwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847208.8836002-268-76034340361637/AnsiballZ_replace.py'
Jan 31 08:13:29 compute-0 sudo[220769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:29 compute-0 ceph-mon[75294]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:29 compute-0 python3.9[220773]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:29 compute-0 sudo[220769]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:29 compute-0 lvm[220858]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:13:29 compute-0 lvm[220857]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:13:29 compute-0 lvm[220858]: VG ceph_vg1 finished
Jan 31 08:13:29 compute-0 lvm[220857]: VG ceph_vg0 finished
Jan 31 08:13:29 compute-0 lvm[220873]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:13:29 compute-0 lvm[220873]: VG ceph_vg2 finished
Jan 31 08:13:29 compute-0 charming_faraday[220603]: {}
Jan 31 08:13:29 compute-0 systemd[1]: libpod-6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1.scope: Deactivated successfully.
Jan 31 08:13:29 compute-0 systemd[1]: libpod-6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1.scope: Consumed 1.021s CPU time.
Jan 31 08:13:29 compute-0 podman[220586]: 2026-01-31 08:13:29.543928463 +0000 UTC m=+0.886325289 container died 6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_faraday, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f10ff3778ce3653bcd09b9d2ece206b558112f2a9667d79869281bc44ef33e7-merged.mount: Deactivated successfully.
Jan 31 08:13:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:29 compute-0 podman[220586]: 2026-01-31 08:13:29.593300085 +0000 UTC m=+0.935696881 container remove 6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:13:29 compute-0 systemd[1]: libpod-conmon-6422aaed689bac0b28ccd9efc268a47adb3cbc59000786d16bae5170aa13e3e1.scope: Deactivated successfully.
Jan 31 08:13:29 compute-0 sudo[220380]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:13:29 compute-0 sudo[221001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msmgcbpwplaguswzwayyyexdskcbsoue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847209.4348733-277-189335621136770/AnsiballZ_lineinfile.py'
Jan 31 08:13:29 compute-0 sudo[221001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:13:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:13:29 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:13:29 compute-0 sudo[221004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:13:29 compute-0 sudo[221004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:29 compute-0 sudo[221004]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:29 compute-0 python3.9[221003]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:29 compute-0 sudo[221001]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:30 compute-0 sudo[221178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbbibuezeffjftefiapnymskatthjvza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847210.0295324-277-238881836820114/AnsiballZ_lineinfile.py'
Jan 31 08:13:30 compute-0 sudo[221178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:30 compute-0 python3.9[221180]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:30 compute-0 sudo[221178]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:30 compute-0 ceph-mon[75294]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:13:30 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:13:30 compute-0 sudo[221330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkpplpjewinzitlwgvszehlhxylbafct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847210.5889552-277-172225440487360/AnsiballZ_lineinfile.py'
Jan 31 08:13:30 compute-0 sudo[221330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:30 compute-0 python3.9[221332]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:31 compute-0 sudo[221330]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:31 compute-0 sudo[221482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hopwgmoevzymhohoeucvokfpvuiixhrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847211.1272008-277-94464619994804/AnsiballZ_lineinfile.py'
Jan 31 08:13:31 compute-0 sudo[221482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:31 compute-0 python3.9[221484]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:31 compute-0 sudo[221482]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:31 compute-0 sudo[221634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voqaqwrpkguhbptokqqesosotxtyuadp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847211.70426-306-117954160246400/AnsiballZ_stat.py'
Jan 31 08:13:31 compute-0 sudo[221634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:32 compute-0 python3.9[221636]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:13:32 compute-0 sudo[221634]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:32 compute-0 sudo[221788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mintahrtiqtjfkiywqqtoxafspzyufpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847212.232331-314-44222523356711/AnsiballZ_command.py'
Jan 31 08:13:32 compute-0 sudo[221788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:32 compute-0 python3.9[221790]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:13:32 compute-0 sudo[221788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:32 compute-0 ceph-mon[75294]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:33 compute-0 sudo[221941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkuhjsqkpsffupvqqguxbtmhdbhdqmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847212.8027763-323-54052467247770/AnsiballZ_systemd_service.py'
Jan 31 08:13:33 compute-0 sudo[221941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:33 compute-0 python3.9[221943]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:33 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 31 08:13:33 compute-0 sudo[221941]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:33 compute-0 sudo[222097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yroxddzptntpulrcyqbvcdjkntxgxmgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847213.5420234-331-159320833212755/AnsiballZ_systemd_service.py'
Jan 31 08:13:33 compute-0 sudo[222097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:34 compute-0 python3.9[222099]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:34 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 08:13:34 compute-0 udevadm[222104]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 08:13:34 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 08:13:34 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 08:13:34 compute-0 multipathd[222107]: --------start up--------
Jan 31 08:13:34 compute-0 multipathd[222107]: read /etc/multipath.conf
Jan 31 08:13:34 compute-0 multipathd[222107]: path checkers start up
Jan 31 08:13:34 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 08:13:34 compute-0 sudo[222097]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:34 compute-0 sudo[222264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uednwvlxlqokrqhbxfoeaprpilhhxzia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847214.469953-343-103900605350610/AnsiballZ_file.py'
Jan 31 08:13:34 compute-0 sudo[222264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:34 compute-0 python3.9[222266]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 08:13:34 compute-0 sudo[222264]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-0 ceph-mon[75294]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:35 compute-0 sudo[222416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpcufxsaoahmybyjcprdvhtkbjvzcfko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847215.043885-351-185953323711892/AnsiballZ_modprobe.py'
Jan 31 08:13:35 compute-0 sudo[222416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:35 compute-0 python3.9[222418]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 08:13:35 compute-0 kernel: Key type psk registered
Jan 31 08:13:35 compute-0 sudo[222416]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:36 compute-0 sudo[222578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awzjahjuawujunanlrwlwulyprnejnzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847215.8844645-359-239692098538032/AnsiballZ_stat.py'
Jan 31 08:13:36 compute-0 sudo[222578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:36 compute-0 python3.9[222580]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:36 compute-0 sudo[222578]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:36 compute-0 sudo[222701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizimymukomcjorhbyaervhmlzodpspc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847215.8844645-359-239692098538032/AnsiballZ_copy.py'
Jan 31 08:13:36 compute-0 sudo[222701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:37 compute-0 python3.9[222703]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847215.8844645-359-239692098538032/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:37 compute-0 sudo[222701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:37 compute-0 sudo[222853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgbibbltcstlafhuydmipfqoqyqgsdpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847217.208202-375-136067701235149/AnsiballZ_lineinfile.py'
Jan 31 08:13:37 compute-0 sudo[222853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:37 compute-0 ceph-mon[75294]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:37 compute-0 python3.9[222855]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:37 compute-0 sudo[222853]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:38 compute-0 sudo[223005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trxcwvflsnprprfccrbueqcavrmacdie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847217.807272-383-267079367094648/AnsiballZ_systemd.py'
Jan 31 08:13:38 compute-0 sudo[223005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:38 compute-0 python3.9[223007]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:13:38 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 08:13:38 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 08:13:38 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 08:13:38 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 08:13:38 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 08:13:38 compute-0 sudo[223005]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:38 compute-0 ceph-mon[75294]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:38 compute-0 sudo[223162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikuxpwlepoupptiyrpagcpnhhrsanhpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847218.607904-391-8689494432373/AnsiballZ_dnf.py'
Jan 31 08:13:38 compute-0 sudo[223162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:39 compute-0 python3.9[223164]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:13:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:40 compute-0 ceph-mon[75294]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:41 compute-0 systemd[1]: Reloading.
Jan 31 08:13:41 compute-0 systemd-sysv-generator[223191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:41 compute-0 systemd-rc-local-generator[223187]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:41 compute-0 systemd[1]: Reloading.
Jan 31 08:13:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:41 compute-0 systemd-rc-local-generator[223232]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:41 compute-0 systemd-sysv-generator[223235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:41 compute-0 systemd-logind[810]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 08:13:42 compute-0 systemd-logind[810]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 08:13:42 compute-0 lvm[223282]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:13:42 compute-0 lvm[223282]: VG ceph_vg0 finished
Jan 31 08:13:42 compute-0 lvm[223281]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:13:42 compute-0 lvm[223281]: VG ceph_vg2 finished
Jan 31 08:13:42 compute-0 lvm[223280]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:13:42 compute-0 lvm[223280]: VG ceph_vg1 finished
Jan 31 08:13:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:13:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:13:42 compute-0 systemd[1]: Reloading.
Jan 31 08:13:42 compute-0 systemd-rc-local-generator[223333]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:42 compute-0 systemd-sysv-generator[223336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:42 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:13:43 compute-0 ceph-mon[75294]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:43 compute-0 sudo[223162]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:43 compute-0 sudo[224632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idytomtcgccrxafeeuiuxhdbkrpwroht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847223.7603726-399-177143622777076/AnsiballZ_systemd_service.py'
Jan 31 08:13:43 compute-0 sudo[224632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:44 compute-0 podman[224635]: 2026-01-31 08:13:44.212108431 +0000 UTC m=+0.077886565 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 08:13:44 compute-0 python3.9[224634]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:13:44 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 31 08:13:44 compute-0 iscsid[217518]: iscsid shutting down.
Jan 31 08:13:44 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 08:13:44 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 31 08:13:44 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 08:13:44 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 08:13:44 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 08:13:44 compute-0 sudo[224632]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:44 compute-0 ceph-mon[75294]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:44 compute-0 sudo[224814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcdaxflknnnaiydhvvnkphuwpsjthyft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847224.5267892-407-141822111369215/AnsiballZ_systemd_service.py'
Jan 31 08:13:44 compute-0 sudo[224814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:45 compute-0 python3.9[224816]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:13:45 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 08:13:45 compute-0 multipathd[222107]: exit (signal)
Jan 31 08:13:45 compute-0 multipathd[222107]: --------shut down-------
Jan 31 08:13:45 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 08:13:45 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 08:13:45 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 08:13:45 compute-0 multipathd[224822]: --------start up--------
Jan 31 08:13:45 compute-0 multipathd[224822]: read /etc/multipath.conf
Jan 31 08:13:45 compute-0 multipathd[224822]: path checkers start up
Jan 31 08:13:45 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 08:13:45 compute-0 sudo[224814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:46 compute-0 python3.9[224979]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:13:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:13:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:13:46 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.135s CPU time.
Jan 31 08:13:46 compute-0 systemd[1]: run-r74c152eb944c4545bf65841fa3af15cd.service: Deactivated successfully.
Jan 31 08:13:46 compute-0 sudo[225134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkviedrhfsyxjiecxjgigodqzhvknti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847226.472571-425-88648647694461/AnsiballZ_file.py'
Jan 31 08:13:46 compute-0 sudo[225134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:46 compute-0 python3.9[225136]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:13:46.954 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:13:46.955 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:46 compute-0 sudo[225134]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:13:46.956 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:46 compute-0 ceph-mon[75294]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:47 compute-0 sudo[225286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgqhcauklvtcjmyyaxhbaydnhkmniye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847227.2459276-436-174392135307007/AnsiballZ_systemd_service.py'
Jan 31 08:13:47 compute-0 sudo[225286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:47 compute-0 python3.9[225288]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:13:47 compute-0 systemd[1]: Reloading.
Jan 31 08:13:48 compute-0 systemd-sysv-generator[225321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:48 compute-0 systemd-rc-local-generator[225316]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:48 compute-0 sudo[225286]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:48 compute-0 python3.9[225474]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:13:48 compute-0 network[225491]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:13:48 compute-0 network[225492]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:13:48 compute-0 network[225493]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:13:49 compute-0 ceph-mon[75294]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:49 compute-0 podman[225512]: 2026-01-31 08:13:49.853008451 +0000 UTC m=+0.091960861 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:13:50
Jan 31 08:13:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:13:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:13:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta']
Jan 31 08:13:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:13:51 compute-0 ceph-mon[75294]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:52 compute-0 sudo[225782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wolynpdopkpaqfkxsrwdfwpdlfdtzqrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847231.9104514-455-63395586574011/AnsiballZ_systemd_service.py'
Jan 31 08:13:52 compute-0 sudo[225782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:52 compute-0 python3.9[225784]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:52 compute-0 sudo[225782]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:52 compute-0 sudo[225935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxudqqumnnnwfdxobqraxdnxwysowqlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847232.7355673-455-168532155434804/AnsiballZ_systemd_service.py'
Jan 31 08:13:52 compute-0 sudo[225935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:53 compute-0 python3.9[225937]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:53 compute-0 sudo[225935]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:53 compute-0 ceph-mon[75294]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:53 compute-0 sudo[226088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjeqqpqtkszzyerldpmfqprugugnvwpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847233.4308264-455-98182084288104/AnsiballZ_systemd_service.py'
Jan 31 08:13:53 compute-0 sudo[226088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:53 compute-0 python3.9[226090]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:53 compute-0 sudo[226088]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:54 compute-0 sudo[226241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eknrtvisrorskvflhdzlnsomdcfskbhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847234.075127-455-155303694171492/AnsiballZ_systemd_service.py'
Jan 31 08:13:54 compute-0 sudo[226241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:54 compute-0 ceph-mon[75294]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:54 compute-0 python3.9[226243]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:54 compute-0 sudo[226241]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:55 compute-0 sudo[226394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlvkhkfhvcfvmqftnrqgglbaceiipnwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847234.8291671-455-75429836898380/AnsiballZ_systemd_service.py'
Jan 31 08:13:55 compute-0 sudo[226394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:55 compute-0 python3.9[226396]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:55 compute-0 sudo[226394]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:13:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:13:55 compute-0 sudo[226547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czrkyoeqeqzpvyznkaakpisshrejnfmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847235.6408913-455-20773223662938/AnsiballZ_systemd_service.py'
Jan 31 08:13:55 compute-0 sudo[226547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:56 compute-0 python3.9[226549]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:56 compute-0 sudo[226547]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:56 compute-0 sudo[226700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckdntdfewuxsghnhxsedyojyapwlcjlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847236.3713129-455-114246998821134/AnsiballZ_systemd_service.py'
Jan 31 08:13:56 compute-0 sudo[226700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:56 compute-0 python3.9[226702]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:56 compute-0 sudo[226700]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:57 compute-0 ceph-mon[75294]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:57 compute-0 sudo[226853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muirlbjqrrdomecqgfauxwfocwzlhkju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847237.0614498-455-217468755838980/AnsiballZ_systemd_service.py'
Jan 31 08:13:57 compute-0 sudo[226853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:57 compute-0 python3.9[226855]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:57 compute-0 sudo[226853]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:58 compute-0 sudo[227006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urwpqegvdvrcotlwotucxzkqqjkjqlcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847238.0172803-514-130603924817915/AnsiballZ_file.py'
Jan 31 08:13:58 compute-0 sudo[227006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:58 compute-0 python3.9[227008]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:58 compute-0 sudo[227006]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:58 compute-0 sudo[227158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adshjlxcyfbwxsexjngxnwftrjodhybo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847238.5580451-514-265841313193969/AnsiballZ_file.py'
Jan 31 08:13:58 compute-0 sudo[227158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:59 compute-0 python3.9[227160]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:59 compute-0 sudo[227158]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:59 compute-0 sudo[227310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prmdhhagakdxgnabbfxapjkmslidcikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847239.183718-514-170792678352117/AnsiballZ_file.py'
Jan 31 08:13:59 compute-0 sudo[227310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:59 compute-0 ceph-mon[75294]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:59 compute-0 python3.9[227312]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:59 compute-0 sudo[227310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:59 compute-0 sudo[227462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eolwakkpxfqizvjraserwnijgqfywbez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847239.7191522-514-263810406626337/AnsiballZ_file.py'
Jan 31 08:13:59 compute-0 sudo[227462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:00 compute-0 python3.9[227464]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:00 compute-0 sudo[227462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:00 compute-0 sudo[227614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcdoefltuczdcizjlbepijxkwgaeyuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847240.4095669-514-73456428871262/AnsiballZ_file.py'
Jan 31 08:14:00 compute-0 sudo[227614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:00 compute-0 ceph-mon[75294]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:00 compute-0 python3.9[227616]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:00 compute-0 sudo[227614]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:01 compute-0 sudo[227766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjvbwtfegbotjboobezvtxfpdhfenouo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847241.0053616-514-105132704711768/AnsiballZ_file.py'
Jan 31 08:14:01 compute-0 sudo[227766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:01 compute-0 python3.9[227768]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:01 compute-0 sudo[227766]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:01 compute-0 sudo[227918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdwtbvskblobkmpsfflorpxmrgigcbml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847241.549077-514-175977811247027/AnsiballZ_file.py'
Jan 31 08:14:01 compute-0 sudo[227918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:02 compute-0 python3.9[227920]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:02 compute-0 sudo[227918]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:02 compute-0 sudo[228070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhjdszxtkpaypqetljjwzeweylweyvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847242.1525507-514-55634427468229/AnsiballZ_file.py'
Jan 31 08:14:02 compute-0 sudo[228070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:02 compute-0 python3.9[228072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:02 compute-0 sudo[228070]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:03 compute-0 sudo[228222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxzcvxqfkhfrbbkvvzgkocgkbqkjjbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847242.7903621-571-199773440161645/AnsiballZ_file.py'
Jan 31 08:14:03 compute-0 ceph-mon[75294]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:03 compute-0 sudo[228222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:03 compute-0 python3.9[228224]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:03 compute-0 sudo[228222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:03 compute-0 sudo[228374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnrnltgoltixemwkghdoghyyfuquxqvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847243.3704512-571-13951179049546/AnsiballZ_file.py'
Jan 31 08:14:03 compute-0 sudo[228374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:03 compute-0 python3.9[228376]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:03 compute-0 sudo[228374]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:04 compute-0 sudo[228526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvmknrpchjchssojjixgshvehqdtdeeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847243.9130588-571-160456906008449/AnsiballZ_file.py'
Jan 31 08:14:04 compute-0 sudo[228526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:04 compute-0 python3.9[228528]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:04 compute-0 sudo[228526]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:04 compute-0 sudo[228678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yamcmurfggvizoaoxagrtxjkrcxqcuuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847244.5114002-571-230334194125582/AnsiballZ_file.py'
Jan 31 08:14:04 compute-0 sudo[228678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:05 compute-0 python3.9[228680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:05 compute-0 sudo[228678]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:05 compute-0 ceph-mon[75294]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:05 compute-0 sudo[228830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptoftbqsjbifpxpfeybrbysceuoxevkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847245.127916-571-248355116656350/AnsiballZ_file.py'
Jan 31 08:14:05 compute-0 sudo[228830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:05 compute-0 python3.9[228832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:05 compute-0 sudo[228830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:05 compute-0 sudo[228982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afpxhtiazhlhjtggwjuqquptrjidakrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847245.7291937-571-15032427433011/AnsiballZ_file.py'
Jan 31 08:14:05 compute-0 sudo[228982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:06 compute-0 python3.9[228984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:06 compute-0 sudo[228982]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:14:06 compute-0 sudo[229134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcuyjjxkihisfrqwultyifrfditigvor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847246.3120518-571-219594412528873/AnsiballZ_file.py'
Jan 31 08:14:06 compute-0 sudo[229134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.591358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847246591445, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1725, "num_deletes": 250, "total_data_size": 2943909, "memory_usage": 2992272, "flush_reason": "Manual Compaction"}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847246608462, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1674361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11911, "largest_seqno": 13635, "table_properties": {"data_size": 1668661, "index_size": 2839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14233, "raw_average_key_size": 20, "raw_value_size": 1656111, "raw_average_value_size": 2339, "num_data_blocks": 132, "num_entries": 708, "num_filter_entries": 708, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847054, "oldest_key_time": 1769847054, "file_creation_time": 1769847246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 17135 microseconds, and 4242 cpu microseconds.
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.608518) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1674361 bytes OK
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.608537) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.614756) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.614787) EVENT_LOG_v1 {"time_micros": 1769847246614782, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.614808) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2936552, prev total WAL file size 2936552, number of live WAL files 2.
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.615496) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1635KB)], [29(8116KB)]
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847246615564, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9985331, "oldest_snapshot_seqno": -1}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4067 keys, 7855528 bytes, temperature: kUnknown
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847246685985, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7855528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7826456, "index_size": 17826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96745, "raw_average_key_size": 23, "raw_value_size": 7751244, "raw_average_value_size": 1905, "num_data_blocks": 771, "num_entries": 4067, "num_filter_entries": 4067, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769847246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.686193) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7855528 bytes
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.690310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.7 rd, 111.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(10.7) write-amplify(4.7) OK, records in: 4488, records dropped: 421 output_compression: NoCompression
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.690335) EVENT_LOG_v1 {"time_micros": 1769847246690324, "job": 12, "event": "compaction_finished", "compaction_time_micros": 70478, "compaction_time_cpu_micros": 16144, "output_level": 6, "num_output_files": 1, "total_output_size": 7855528, "num_input_records": 4488, "num_output_records": 4067, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847246690626, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847246691444, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.615427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.691475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.691480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.691482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.691484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:14:06.691486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:06 compute-0 python3.9[229136]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:06 compute-0 sudo[229134]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:07 compute-0 sudo[229286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmephrpohdvsfsdymsvnazlyrcwsjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847246.898556-571-211143431516496/AnsiballZ_file.py'
Jan 31 08:14:07 compute-0 sudo[229286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:07 compute-0 python3.9[229288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:07 compute-0 sudo[229286]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:07 compute-0 ceph-mon[75294]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:07 compute-0 sudo[229438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibkijorpnunssozpfgslzrcpvbibvqze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847247.6279218-629-269771889332946/AnsiballZ_command.py'
Jan 31 08:14:07 compute-0 sudo[229438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:08 compute-0 python3.9[229440]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:08 compute-0 sudo[229438]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:08 compute-0 ceph-mon[75294]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:08 compute-0 python3.9[229592]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:14:09 compute-0 sudo[229742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urnmefkwkleihvoqdfmeasoluaoleglo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847249.1037042-647-226137544919918/AnsiballZ_systemd_service.py'
Jan 31 08:14:09 compute-0 sudo[229742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:09 compute-0 python3.9[229744]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:14:09 compute-0 systemd[1]: Reloading.
Jan 31 08:14:09 compute-0 systemd-rc-local-generator[229770]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:14:09 compute-0 systemd-sysv-generator[229773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:14:10 compute-0 sudo[229742]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:10 compute-0 sudo[229928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nciiqqspspowmqkdksaoiboyltxwpfan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847250.541374-655-23792463576577/AnsiballZ_command.py'
Jan 31 08:14:10 compute-0 sudo[229928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:11 compute-0 python3.9[229930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:11 compute-0 sudo[229928]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:11 compute-0 sudo[230081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgndrcsgyzmlpehswgjkbwjwbvuxlhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847251.1560156-655-237873807117235/AnsiballZ_command.py'
Jan 31 08:14:11 compute-0 sudo[230081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:11 compute-0 ceph-mon[75294]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:11 compute-0 python3.9[230083]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:11 compute-0 sudo[230081]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:12 compute-0 sudo[230234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtqhjpzqnjxvwvlezizarhevyztdckx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847251.7713003-655-165587547457143/AnsiballZ_command.py'
Jan 31 08:14:12 compute-0 sudo[230234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:12 compute-0 python3.9[230236]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:12 compute-0 sudo[230234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:12 compute-0 sudo[230387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kegdjxzhubprdqsyrgpwdzdelotnbgkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847252.4639192-655-197953006193809/AnsiballZ_command.py'
Jan 31 08:14:12 compute-0 sudo[230387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:12 compute-0 python3.9[230389]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:12 compute-0 sudo[230387]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:13 compute-0 sudo[230540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szaatvrahlzonvyixectzuozmbjajxdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847252.9922955-655-169163444339094/AnsiballZ_command.py'
Jan 31 08:14:13 compute-0 sudo[230540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:13 compute-0 python3.9[230542]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:13 compute-0 sudo[230540]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:13 compute-0 ceph-mon[75294]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:13 compute-0 sudo[230693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxhkjavlihjjmvtitxxxdwywaghwlye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847253.564178-655-210483013453590/AnsiballZ_command.py'
Jan 31 08:14:13 compute-0 sudo[230693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:13 compute-0 python3.9[230695]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:13 compute-0 sudo[230693]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:14 compute-0 sudo[230859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsnrmfwqgtifwcxhdfoqknqsadubowpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847254.1042933-655-234220179531329/AnsiballZ_command.py'
Jan 31 08:14:14 compute-0 sudo[230859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:14 compute-0 podman[230820]: 2026-01-31 08:14:14.395992095 +0000 UTC m=+0.084809345 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:14:14 compute-0 python3.9[230867]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:14 compute-0 sudo[230859]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:14 compute-0 ceph-mon[75294]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:15 compute-0 sudo[231025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kukskdbqqqeczcdqpubjglycgathwrtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847254.8547568-655-264725503728717/AnsiballZ_command.py'
Jan 31 08:14:15 compute-0 sudo[231025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:15 compute-0 python3.9[231027]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:15 compute-0 sudo[231025]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:15 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 08:14:16 compute-0 sudo[231179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkvkvfnvfuwjmvncrchgpgahrkjsrwjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847256.226824-734-235731991878873/AnsiballZ_file.py'
Jan 31 08:14:16 compute-0 sudo[231179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:16 compute-0 python3.9[231181]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:16 compute-0 sudo[231179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:16 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 08:14:17 compute-0 sudo[231332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyreyecowweubcopibvfktffoodprdap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847256.8392138-734-113871651245304/AnsiballZ_file.py'
Jan 31 08:14:17 compute-0 sudo[231332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:17 compute-0 ceph-mon[75294]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:17 compute-0 python3.9[231334]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:17 compute-0 sudo[231332]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:17 compute-0 sudo[231484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqjrqcnmntjyaazvomiguzephqrnltam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847257.4110777-734-156882752479153/AnsiballZ_file.py'
Jan 31 08:14:17 compute-0 sudo[231484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:17 compute-0 python3.9[231486]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:17 compute-0 sudo[231484]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:18 compute-0 sudo[231636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zryifxmxmulkllgzyiyxsnlxmwytwedt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847257.930529-756-59358040456247/AnsiballZ_file.py'
Jan 31 08:14:18 compute-0 sudo[231636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:18 compute-0 python3.9[231638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:18 compute-0 sudo[231636]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:18 compute-0 sudo[231788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txwohbwmvsbpcsnbczguklmuiukfppkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847258.4423068-756-146797228282550/AnsiballZ_file.py'
Jan 31 08:14:18 compute-0 sudo[231788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:18 compute-0 python3.9[231790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:18 compute-0 sudo[231788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:19 compute-0 sudo[231940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktgltybinymwypupumqoqvpcjoywczsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847259.00985-756-247801479214312/AnsiballZ_file.py'
Jan 31 08:14:19 compute-0 sudo[231940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:19 compute-0 ceph-mon[75294]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:19 compute-0 python3.9[231942]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:19 compute-0 sudo[231940]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:19 compute-0 sudo[232092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcconupjeqquwrmfzewwzkmwfpwhqgaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847259.5491233-756-281018325602798/AnsiballZ_file.py'
Jan 31 08:14:19 compute-0 sudo[232092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:19 compute-0 python3.9[232094]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:19 compute-0 sudo[232092]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:20 compute-0 podman[232165]: 2026-01-31 08:14:20.168451495 +0000 UTC m=+0.044145626 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 08:14:20 compute-0 sudo[232262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjsdzfrzkbcxmvajrxihkyijreowapx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847260.0729704-756-181486718043037/AnsiballZ_file.py'
Jan 31 08:14:20 compute-0 sudo[232262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:20 compute-0 python3.9[232264]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:20 compute-0 sudo[232262]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:21 compute-0 sudo[232414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvonfddlyvywgbfjzkcaulgubwerqlpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847260.7987702-756-118359360178413/AnsiballZ_file.py'
Jan 31 08:14:21 compute-0 sudo[232414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:21 compute-0 python3.9[232416]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:21 compute-0 sudo[232414]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:21 compute-0 ceph-mon[75294]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:21 compute-0 sudo[232566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmnpuvoffnhzhjbaqwpmjeisnhdwlkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847261.432823-756-22080658769997/AnsiballZ_file.py'
Jan 31 08:14:21 compute-0 sudo[232566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:21 compute-0 python3.9[232568]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:21 compute-0 sudo[232566]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:23 compute-0 ceph-mon[75294]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:24 compute-0 ceph-mon[75294]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:24 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 08:14:24 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 08:14:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:26 compute-0 sudo[232720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kotjwttkczoddyqhveqrjuthpahpstjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847266.4635258-945-24388979670174/AnsiballZ_getent.py'
Jan 31 08:14:26 compute-0 sudo[232720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:27 compute-0 ceph-mon[75294]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:27 compute-0 python3.9[232722]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 08:14:27 compute-0 sudo[232720]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:27 compute-0 sudo[232873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlohxurjqgmcdwlooehzzrhjlzldbtco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847267.4487734-953-180095987906635/AnsiballZ_group.py'
Jan 31 08:14:27 compute-0 sudo[232873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:28 compute-0 python3.9[232875]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:14:28 compute-0 groupadd[232876]: group added to /etc/group: name=nova, GID=42436
Jan 31 08:14:28 compute-0 groupadd[232876]: group added to /etc/gshadow: name=nova
Jan 31 08:14:28 compute-0 groupadd[232876]: new group: name=nova, GID=42436
Jan 31 08:14:28 compute-0 sudo[232873]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:29 compute-0 sudo[233031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frpavzdmapqxxwdlhsoadoeujazqeqfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847268.980343-961-275210736541430/AnsiballZ_user.py'
Jan 31 08:14:29 compute-0 sudo[233031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:29 compute-0 ceph-mon[75294]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:29 compute-0 python3.9[233033]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 08:14:29 compute-0 useradd[233035]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 31 08:14:29 compute-0 sudo[233036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:29 compute-0 sudo[233036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:29 compute-0 sudo[233036]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:29 compute-0 sudo[233061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:14:29 compute-0 sudo[233061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:29 compute-0 useradd[233035]: add 'nova' to group 'libvirt'
Jan 31 08:14:29 compute-0 useradd[233035]: add 'nova' to shadow group 'libvirt'
Jan 31 08:14:30 compute-0 sudo[233061]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:30 compute-0 ceph-mon[75294]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:30 compute-0 sudo[233031]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:14:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:14:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:14:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:14:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:14:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:14:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:14:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:14:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:14:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:14:30 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:30 compute-0 sudo[233147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:30 compute-0 sudo[233147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:30 compute-0 sudo[233147]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:30 compute-0 sudo[233172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:14:30 compute-0 sudo[233172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:31 compute-0 podman[233209]: 2026-01-31 08:14:31.260173377 +0000 UTC m=+0.034806986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:31 compute-0 sshd-session[233223]: Accepted publickey for zuul from 192.168.122.30 port 49138 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:14:31 compute-0 systemd-logind[810]: New session 51 of user zuul.
Jan 31 08:14:31 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 31 08:14:31 compute-0 sshd-session[233223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:14:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:31 compute-0 podman[233209]: 2026-01-31 08:14:31.688263341 +0000 UTC m=+0.462896870 container create ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_greider, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:14:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:14:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:14:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:14:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:14:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:31 compute-0 sshd-session[233226]: Received disconnect from 192.168.122.30 port 49138:11: disconnected by user
Jan 31 08:14:31 compute-0 sshd-session[233226]: Disconnected from user zuul 192.168.122.30 port 49138
Jan 31 08:14:31 compute-0 sshd-session[233223]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:14:31 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 08:14:31 compute-0 systemd-logind[810]: Session 51 logged out. Waiting for processes to exit.
Jan 31 08:14:31 compute-0 systemd-logind[810]: Removed session 51.
Jan 31 08:14:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:31 compute-0 systemd[1]: Started libpod-conmon-ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47.scope.
Jan 31 08:14:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:32 compute-0 podman[233209]: 2026-01-31 08:14:32.047875676 +0000 UTC m=+0.822509225 container init ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:14:32 compute-0 podman[233209]: 2026-01-31 08:14:32.054193712 +0000 UTC m=+0.828827241 container start ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:14:32 compute-0 sharp_greider[233305]: 167 167
Jan 31 08:14:32 compute-0 systemd[1]: libpod-ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47.scope: Deactivated successfully.
Jan 31 08:14:32 compute-0 podman[233209]: 2026-01-31 08:14:32.201071573 +0000 UTC m=+0.975705122 container attach ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:14:32 compute-0 podman[233209]: 2026-01-31 08:14:32.201796282 +0000 UTC m=+0.976429831 container died ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_greider, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:32 compute-0 python3.9[233389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:32 compute-0 python3.9[233515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847271.858922-986-28293907860335/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5518fc8aed180e5a684fefa2f3f71101a2c2beef488ae4c1766f0d5743a5fb4-merged.mount: Deactivated successfully.
Jan 31 08:14:33 compute-0 ceph-mon[75294]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:33 compute-0 python3.9[233666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:33 compute-0 podman[233209]: 2026-01-31 08:14:33.29246709 +0000 UTC m=+2.067100619 container remove ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:33 compute-0 systemd[1]: libpod-conmon-ba5fd63145c5dc2f6c7f24bbd6712663760688bc860a453038ea875dc3e24c47.scope: Deactivated successfully.
Jan 31 08:14:33 compute-0 podman[233676]: 2026-01-31 08:14:33.411624422 +0000 UTC m=+0.028055519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:33 compute-0 podman[233676]: 2026-01-31 08:14:33.541918034 +0000 UTC m=+0.158349061 container create 71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pascal, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:14:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:33 compute-0 systemd[1]: Started libpod-conmon-71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970.scope.
Jan 31 08:14:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fc909403a92488f2d655546c79ecb448c9f879efc1b2b6e8082f281e0bb506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fc909403a92488f2d655546c79ecb448c9f879efc1b2b6e8082f281e0bb506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fc909403a92488f2d655546c79ecb448c9f879efc1b2b6e8082f281e0bb506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fc909403a92488f2d655546c79ecb448c9f879efc1b2b6e8082f281e0bb506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fc909403a92488f2d655546c79ecb448c9f879efc1b2b6e8082f281e0bb506/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:33 compute-0 python3.9[233763]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:33 compute-0 podman[233676]: 2026-01-31 08:14:33.801736534 +0000 UTC m=+0.418167561 container init 71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pascal, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:14:33 compute-0 podman[233676]: 2026-01-31 08:14:33.809635683 +0000 UTC m=+0.426066720 container start 71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:14:33 compute-0 podman[233676]: 2026-01-31 08:14:33.968826444 +0000 UTC m=+0.585257491 container attach 71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:14:34 compute-0 modest_pascal[233767]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:14:34 compute-0 modest_pascal[233767]: --> All data devices are unavailable
Jan 31 08:14:34 compute-0 systemd[1]: libpod-71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970.scope: Deactivated successfully.
Jan 31 08:14:34 compute-0 podman[233676]: 2026-01-31 08:14:34.297487823 +0000 UTC m=+0.913918900 container died 71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pascal, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:14:34 compute-0 python3.9[233930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-94fc909403a92488f2d655546c79ecb448c9f879efc1b2b6e8082f281e0bb506-merged.mount: Deactivated successfully.
Jan 31 08:14:34 compute-0 podman[233676]: 2026-01-31 08:14:34.715207691 +0000 UTC m=+1.331638718 container remove 71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:14:34 compute-0 systemd[1]: libpod-conmon-71d9e2faeccb7223c6f43043681f85f4d3df5d5e793d94e9b5de20b44800c970.scope: Deactivated successfully.
Jan 31 08:14:34 compute-0 sudo[233172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:34 compute-0 sudo[234072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:34 compute-0 sudo[234072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:34 compute-0 sudo[234072]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:34 compute-0 python3.9[234071]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847273.8980446-986-82765795847263/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:34 compute-0 sudo[234097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:14:34 compute-0 sudo[234097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:35 compute-0 podman[234216]: 2026-01-31 08:14:35.098644807 +0000 UTC m=+0.021036784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:35 compute-0 ceph-mon[75294]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:35 compute-0 podman[234216]: 2026-01-31 08:14:35.2322554 +0000 UTC m=+0.154647327 container create 9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:14:35 compute-0 python3.9[234296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:35 compute-0 systemd[1]: Started libpod-conmon-9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba.scope.
Jan 31 08:14:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:35 compute-0 podman[234216]: 2026-01-31 08:14:35.527703759 +0000 UTC m=+0.450095666 container init 9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:14:35 compute-0 podman[234216]: 2026-01-31 08:14:35.537346446 +0000 UTC m=+0.459738383 container start 9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:14:35 compute-0 naughty_cori[234315]: 167 167
Jan 31 08:14:35 compute-0 systemd[1]: libpod-9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba.scope: Deactivated successfully.
Jan 31 08:14:35 compute-0 podman[234216]: 2026-01-31 08:14:35.577384505 +0000 UTC m=+0.499776422 container attach 9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cori, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:14:35 compute-0 podman[234216]: 2026-01-31 08:14:35.580296295 +0000 UTC m=+0.502688392 container died 9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cori, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:14:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-78f674f601f9fb6f77670020a0927448f5fb4043fd272795a9e29f26490dcdda-merged.mount: Deactivated successfully.
Jan 31 08:14:35 compute-0 python3.9[234438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847274.9723566-986-252666738400950/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:35 compute-0 podman[234216]: 2026-01-31 08:14:35.992548621 +0000 UTC m=+0.914940528 container remove 9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:14:35 compute-0 systemd[1]: libpod-conmon-9ff7307ebfc326b0e7e505977940df6222ed77d7effacbd3317e5ef0607d05ba.scope: Deactivated successfully.
Jan 31 08:14:36 compute-0 podman[234522]: 2026-01-31 08:14:36.113428371 +0000 UTC m=+0.022971557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:36 compute-0 podman[234522]: 2026-01-31 08:14:36.211781657 +0000 UTC m=+0.121324793 container create 348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:14:36 compute-0 systemd[1]: Started libpod-conmon-348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116.scope.
Jan 31 08:14:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54920f583e71c82879f05430f60cd57a36e8681fcd5b468980e4f1e8ffc12632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54920f583e71c82879f05430f60cd57a36e8681fcd5b468980e4f1e8ffc12632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54920f583e71c82879f05430f60cd57a36e8681fcd5b468980e4f1e8ffc12632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54920f583e71c82879f05430f60cd57a36e8681fcd5b468980e4f1e8ffc12632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:36 compute-0 python3.9[234609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:36 compute-0 podman[234522]: 2026-01-31 08:14:36.478669854 +0000 UTC m=+0.388213020 container init 348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:14:36 compute-0 podman[234522]: 2026-01-31 08:14:36.484033273 +0000 UTC m=+0.393576399 container start 348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:14:36 compute-0 podman[234522]: 2026-01-31 08:14:36.564015709 +0000 UTC m=+0.473558845 container attach 348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]: {
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:     "0": [
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:         {
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "devices": [
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "/dev/loop3"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             ],
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_name": "ceph_lv0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_size": "21470642176",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "name": "ceph_lv0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "tags": {
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cluster_name": "ceph",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.crush_device_class": "",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.encrypted": "0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.objectstore": "bluestore",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osd_id": "0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.type": "block",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.vdo": "0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.with_tpm": "0"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             },
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "type": "block",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "vg_name": "ceph_vg0"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:         }
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:     ],
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:     "1": [
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:         {
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "devices": [
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "/dev/loop4"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             ],
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_name": "ceph_lv1",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_size": "21470642176",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "name": "ceph_lv1",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "tags": {
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cluster_name": "ceph",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.crush_device_class": "",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.encrypted": "0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.objectstore": "bluestore",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osd_id": "1",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.type": "block",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.vdo": "0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.with_tpm": "0"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             },
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "type": "block",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "vg_name": "ceph_vg1"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:         }
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:     ],
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:     "2": [
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:         {
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "devices": [
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "/dev/loop5"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             ],
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_name": "ceph_lv2",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_size": "21470642176",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "name": "ceph_lv2",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "tags": {
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.cluster_name": "ceph",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.crush_device_class": "",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.encrypted": "0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.objectstore": "bluestore",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osd_id": "2",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.type": "block",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.vdo": "0",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:                 "ceph.with_tpm": "0"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             },
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "type": "block",
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:             "vg_name": "ceph_vg2"
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:         }
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]:     ]
Jan 31 08:14:36 compute-0 nervous_stonebraker[234613]: }
Jan 31 08:14:36 compute-0 systemd[1]: libpod-348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116.scope: Deactivated successfully.
Jan 31 08:14:36 compute-0 podman[234522]: 2026-01-31 08:14:36.776860468 +0000 UTC m=+0.686403604 container died 348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:14:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-54920f583e71c82879f05430f60cd57a36e8681fcd5b468980e4f1e8ffc12632-merged.mount: Deactivated successfully.
Jan 31 08:14:36 compute-0 python3.9[234742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847276.0237145-986-239154529272609/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:37 compute-0 podman[234522]: 2026-01-31 08:14:37.022132625 +0000 UTC m=+0.931675761 container remove 348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:14:37 compute-0 systemd[1]: libpod-conmon-348bad3798b9befc370a22d371f54a79677c30a8edb852d3abaca6ab87c2c116.scope: Deactivated successfully.
Jan 31 08:14:37 compute-0 sudo[234097]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:37 compute-0 sudo[234778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:37 compute-0 sudo[234778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:37 compute-0 sudo[234778]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:37 compute-0 sudo[234827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:14:37 compute-0 sudo[234827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:37 compute-0 ceph-mon[75294]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:37 compute-0 podman[234965]: 2026-01-31 08:14:37.459711053 +0000 UTC m=+0.110897425 container create 1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:14:37 compute-0 podman[234965]: 2026-01-31 08:14:37.366608632 +0000 UTC m=+0.017795024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:37 compute-0 systemd[1]: Started libpod-conmon-1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a.scope.
Jan 31 08:14:37 compute-0 python3.9[234961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:37 compute-0 podman[234965]: 2026-01-31 08:14:37.632927523 +0000 UTC m=+0.284113915 container init 1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 08:14:37 compute-0 podman[234965]: 2026-01-31 08:14:37.639015232 +0000 UTC m=+0.290201604 container start 1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:14:37 compute-0 flamboyant_chatterjee[234981]: 167 167
Jan 31 08:14:37 compute-0 systemd[1]: libpod-1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a.scope: Deactivated successfully.
Jan 31 08:14:37 compute-0 podman[234965]: 2026-01-31 08:14:37.750592634 +0000 UTC m=+0.401779026 container attach 1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:14:37 compute-0 podman[234965]: 2026-01-31 08:14:37.750951375 +0000 UTC m=+0.402137747 container died 1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chatterjee, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6278ef6ffa2fe499f7cdce5b383c9428c3d6c163018419c4013a049006a1f6e-merged.mount: Deactivated successfully.
Jan 31 08:14:38 compute-0 python3.9[235117]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847277.0904336-986-217106475447642/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:38 compute-0 podman[234965]: 2026-01-31 08:14:38.310947705 +0000 UTC m=+0.962134077 container remove 1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:38 compute-0 systemd[1]: libpod-conmon-1ba2795b1bd0cea7a22fd1d3693e186e8f3b499ad06507192237c6b25f0e684a.scope: Deactivated successfully.
Jan 31 08:14:38 compute-0 sudo[235289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mchbbczzuraebuovkbnykchkiezezxyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847278.2160904-1069-230415111070464/AnsiballZ_file.py'
Jan 31 08:14:38 compute-0 sudo[235289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:38 compute-0 podman[235249]: 2026-01-31 08:14:38.50492525 +0000 UTC m=+0.111161591 container create ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mayer, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:14:38 compute-0 podman[235249]: 2026-01-31 08:14:38.415309926 +0000 UTC m=+0.021546297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:38 compute-0 ceph-mon[75294]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:38 compute-0 systemd[1]: Started libpod-conmon-ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96.scope.
Jan 31 08:14:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3ccf2afff8db09f83e1d687ae106cbc53a05455e52bc01e4cd0c524f7e949e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3ccf2afff8db09f83e1d687ae106cbc53a05455e52bc01e4cd0c524f7e949e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3ccf2afff8db09f83e1d687ae106cbc53a05455e52bc01e4cd0c524f7e949e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3ccf2afff8db09f83e1d687ae106cbc53a05455e52bc01e4cd0c524f7e949e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:38 compute-0 podman[235249]: 2026-01-31 08:14:38.682206654 +0000 UTC m=+0.288443005 container init ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:14:38 compute-0 python3.9[235291]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:38 compute-0 podman[235249]: 2026-01-31 08:14:38.688194089 +0000 UTC m=+0.294430410 container start ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:14:38 compute-0 sudo[235289]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:38 compute-0 podman[235249]: 2026-01-31 08:14:38.791578455 +0000 UTC m=+0.397814796 container attach ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:14:39 compute-0 sudo[235483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acvefjwyolvxwskgdnkimfrcantchjih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847278.8371084-1077-161837906079504/AnsiballZ_copy.py'
Jan 31 08:14:39 compute-0 sudo[235483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:39 compute-0 python3.9[235490]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:39 compute-0 sudo[235483]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:39 compute-0 lvm[235527]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:14:39 compute-0 lvm[235527]: VG ceph_vg1 finished
Jan 31 08:14:39 compute-0 lvm[235529]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:14:39 compute-0 lvm[235529]: VG ceph_vg2 finished
Jan 31 08:14:39 compute-0 lvm[235526]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:14:39 compute-0 lvm[235526]: VG ceph_vg0 finished
Jan 31 08:14:39 compute-0 lvm[235555]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:14:39 compute-0 lvm[235555]: VG ceph_vg2 finished
Jan 31 08:14:39 compute-0 loving_mayer[235295]: {}
Jan 31 08:14:39 compute-0 systemd[1]: libpod-ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96.scope: Deactivated successfully.
Jan 31 08:14:39 compute-0 systemd[1]: libpod-ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96.scope: Consumed 1.019s CPU time.
Jan 31 08:14:39 compute-0 podman[235249]: 2026-01-31 08:14:39.493743654 +0000 UTC m=+1.099980005 container died ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:14:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:39 compute-0 sudo[235692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuhdrcbwfrygnuvdldxmyjxrwyqxvywb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847279.4093084-1085-272075180570971/AnsiballZ_stat.py'
Jan 31 08:14:39 compute-0 sudo[235692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-db3ccf2afff8db09f83e1d687ae106cbc53a05455e52bc01e4cd0c524f7e949e-merged.mount: Deactivated successfully.
Jan 31 08:14:39 compute-0 python3.9[235694]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:14:39 compute-0 sudo[235692]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:40 compute-0 podman[235249]: 2026-01-31 08:14:40.004037807 +0000 UTC m=+1.610274128 container remove ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mayer, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:14:40 compute-0 sudo[234827]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:14:40 compute-0 systemd[1]: libpod-conmon-ae9042952220fe636682b004250bd3576bb4afdf57afe448f0278017fe3c1c96.scope: Deactivated successfully.
Jan 31 08:14:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:14:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:14:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:14:40 compute-0 sudo[235849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlgaowsiengvrxoakceozrccgsthqoqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847279.9453595-1093-124858088035694/AnsiballZ_stat.py'
Jan 31 08:14:40 compute-0 sudo[235849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:40 compute-0 sudo[235845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:14:40 compute-0 sudo[235845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:40 compute-0 sudo[235845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:40 compute-0 python3.9[235862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:40 compute-0 sudo[235849]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:40 compute-0 sudo[235993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onufadnqzfrgdswfpdmtjjxshmmrxuza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847279.9453595-1093-124858088035694/AnsiballZ_copy.py'
Jan 31 08:14:40 compute-0 sudo[235993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:40 compute-0 python3.9[235995]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769847279.9453595-1093-124858088035694/.source _original_basename=.n3w5aydn follow=False checksum=acb015d274b7d06e3c8514d5d2bc488efdc85fa6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 08:14:40 compute-0 sudo[235993]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:41 compute-0 ceph-mon[75294]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:41 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:14:41 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:14:41 compute-0 python3.9[236147]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:14:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:42 compute-0 python3.9[236299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:42 compute-0 python3.9[236420]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847281.7539656-1119-150484340977602/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:43 compute-0 python3.9[236570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:43 compute-0 ceph-mon[75294]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:43 compute-0 python3.9[236691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847282.7341595-1134-178739125098537/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:44 compute-0 sudo[236841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orvsrosqqkahoexmrwflliswaylrytvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847283.8644385-1151-279465650209157/AnsiballZ_container_config_data.py'
Jan 31 08:14:44 compute-0 sudo[236841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:44 compute-0 python3.9[236843]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 08:14:44 compute-0 sudo[236841]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:44 compute-0 ceph-mon[75294]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:45 compute-0 sudo[237002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgmigvwwxjblikcvaijvepszmkbbigcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847284.749534-1162-61254230371620/AnsiballZ_container_config_hash.py'
Jan 31 08:14:45 compute-0 sudo[237002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:45 compute-0 podman[236967]: 2026-01-31 08:14:45.185300061 +0000 UTC m=+0.071407570 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 08:14:45 compute-0 python3.9[237007]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:14:45 compute-0 sudo[237002]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:46 compute-0 sudo[237168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xugwkzzhgjvzdvsfwjktbxihyayzrorc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847285.6746228-1172-201779801946649/AnsiballZ_edpm_container_manage.py'
Jan 31 08:14:46 compute-0 sudo[237168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:46 compute-0 python3[237170]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:14:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:14:46.955 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:14:46.956 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:14:46.956 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:47 compute-0 ceph-mon[75294]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:49 compute-0 ceph-mon[75294]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:14:50
Jan 31 08:14:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:14:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:14:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'volumes']
Jan 31 08:14:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:14:51 compute-0 ceph-mon[75294]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:14:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:55 compute-0 podman[237225]: 2026-01-31 08:14:55.714530993 +0000 UTC m=+4.582669679 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:14:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:00 compute-0 sshd-session[237244]: Connection closed by 80.94.92.182 port 47336
Jan 31 08:15:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:04 compute-0 ceph-mon[75294]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:15:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:10 compute-0 ceph-mds[96942]: mds.beacon.cephfs.compute-0.xdvglw missed beacon ack from the monitors
Jan 31 08:15:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:16 compute-0 podman[237263]: 2026-01-31 08:15:16.191753939 +0000 UTC m=+0.065538638 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:15:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 ceph-mon[75294]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:15:20 compute-0 ceph-mon[75294]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:20 compute-0 ceph-mon[75294]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:20 compute-0 ceph-mon[75294]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:20 compute-0 ceph-mon[75294]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:15:21 compute-0 sshd-session[237290]: Invalid user solana from 193.32.162.145 port 32892
Jan 31 08:15:21 compute-0 sshd-session[237290]: Connection closed by invalid user solana 193.32.162.145 port 32892 [preauth]
Jan 31 08:15:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:15:23 compute-0 ceph-mon[75294]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:15:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:15:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:15:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:15:30 compute-0 ceph-mon[75294]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:15:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:15:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 08:15:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:34 compute-0 ceph-mon[75294]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:15:34 compute-0 ceph-mon[75294]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:15:34 compute-0 ceph-mon[75294]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:15:34 compute-0 ceph-mon[75294]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:15:34 compute-0 podman[237183]: 2026-01-31 08:15:34.705068079 +0000 UTC m=+48.157124073 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 08:15:34 compute-0 podman[237315]: 2026-01-31 08:15:34.791970688 +0000 UTC m=+0.018702059 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 08:15:35 compute-0 podman[237315]: 2026-01-31 08:15:35.006699048 +0000 UTC m=+0.233430369 container create d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 31 08:15:35 compute-0 python3[237170]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 08:15:35 compute-0 sudo[237168]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:35 compute-0 ceph-mon[75294]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:15:35 compute-0 ceph-mon[75294]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 08:15:35 compute-0 sudo[237503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeqvqplxkzlyothyqdhpywuxmzrkqovz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847335.3646662-1180-62025217519988/AnsiballZ_stat.py'
Jan 31 08:15:35 compute-0 sudo[237503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:15:35 compute-0 python3.9[237505]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:15:35 compute-0 sudo[237503]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:36 compute-0 sudo[237657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzkycrednhvbvowvnidbhyffvgfnxbur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847336.1670532-1192-12956382099203/AnsiballZ_container_config_data.py'
Jan 31 08:15:36 compute-0 sudo[237657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:36 compute-0 ceph-mon[75294]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:15:36 compute-0 python3.9[237659]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 08:15:36 compute-0 sudo[237657]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:37 compute-0 sudo[237809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pclrxpfoxrrjqnvjrdtfjvhzjefywmti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847336.8399656-1203-247654407521351/AnsiballZ_container_config_hash.py'
Jan 31 08:15:37 compute-0 sudo[237809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:37 compute-0 python3.9[237811]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:15:37 compute-0 sudo[237809]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:15:37 compute-0 sudo[237961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ualgoxmssrtsbkxphugohycozvlrpoon ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847337.4906254-1213-215923289604494/AnsiballZ_edpm_container_manage.py'
Jan 31 08:15:37 compute-0 sudo[237961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:38 compute-0 python3[237963]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:15:38 compute-0 podman[238001]: 2026-01-31 08:15:38.141344394 +0000 UTC m=+0.023497783 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 08:15:38 compute-0 podman[238001]: 2026-01-31 08:15:38.307252183 +0000 UTC m=+0.189405542 container create 234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=edpm, tcib_managed=true, container_name=nova_compute)
Jan 31 08:15:38 compute-0 python3[237963]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 08:15:38 compute-0 sudo[237961]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:38 compute-0 ceph-mon[75294]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:15:38 compute-0 sudo[238189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvsoytdzaaeuckofizarxzoxrtejwbyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847338.5837848-1221-143418425788881/AnsiballZ_stat.py'
Jan 31 08:15:38 compute-0 sudo[238189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:39 compute-0 python3.9[238191]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:15:39 compute-0 sudo[238189]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:39 compute-0 sudo[238343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdgeuhchrtyhypjnmzdxwmcjzgmkqlmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847339.3217332-1230-210326977073961/AnsiballZ_file.py'
Jan 31 08:15:39 compute-0 sudo[238343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:15:39 compute-0 python3.9[238345]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:39 compute-0 sudo[238343]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:40 compute-0 sudo[238494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgmaqcjxcttdifibcovgqvpuftqhgwyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847339.8051422-1230-33777597057734/AnsiballZ_copy.py'
Jan 31 08:15:40 compute-0 sudo[238494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:40 compute-0 sudo[238497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:40 compute-0 sudo[238497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:40 compute-0 sudo[238497]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:40 compute-0 python3.9[238496]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769847339.8051422-1230-33777597057734/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:40 compute-0 sudo[238522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:15:40 compute-0 sudo[238522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:40 compute-0 sudo[238494]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:40 compute-0 sudo[238634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxyaknjriaaktaqmripfrfjyhnikwjyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847339.8051422-1230-33777597057734/AnsiballZ_systemd.py'
Jan 31 08:15:40 compute-0 sudo[238634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:40 compute-0 sudo[238522]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:15:40 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:15:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:15:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:15:40 compute-0 python3.9[238636]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:15:40 compute-0 systemd[1]: Reloading.
Jan 31 08:15:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:15:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:15:40 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:15:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:15:40 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:15:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:15:40 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:40 compute-0 systemd-sysv-generator[238707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:15:40 compute-0 ceph-mon[75294]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:15:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:40 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:15:40 compute-0 systemd-rc-local-generator[238701]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:15:41 compute-0 sudo[238634]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:41 compute-0 sudo[238655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:41 compute-0 sudo[238655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:41 compute-0 sudo[238655]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:41 compute-0 sudo[238716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:15:41 compute-0 sudo[238716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:41 compute-0 podman[238713]: 2026-01-31 08:15:41.250040849 +0000 UTC m=+0.052046463 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:15:41 compute-0 sudo[238831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flfwanlqgcgaxfkndeaopjyemkaieroa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847339.8051422-1230-33777597057734/AnsiballZ_systemd.py'
Jan 31 08:15:41 compute-0 sudo[238831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:41 compute-0 podman[238846]: 2026-01-31 08:15:41.470667964 +0000 UTC m=+0.015704286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:41 compute-0 podman[238846]: 2026-01-31 08:15:41.62205445 +0000 UTC m=+0.167090752 container create 8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jepsen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:15:41 compute-0 python3.9[238833]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:15:41 compute-0 systemd[1]: Reloading.
Jan 31 08:15:41 compute-0 systemd-sysv-generator[238887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:15:41 compute-0 systemd-rc-local-generator[238880]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:15:41 compute-0 systemd[1]: Started libpod-conmon-8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0.scope.
Jan 31 08:15:42 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 08:15:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:15:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:15:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:15:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:42 compute-0 podman[238846]: 2026-01-31 08:15:42.107208786 +0000 UTC m=+0.652245088 container init 8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jepsen, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:15:42 compute-0 podman[238846]: 2026-01-31 08:15:42.114747044 +0000 UTC m=+0.659783346 container start 8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:42 compute-0 ecstatic_jepsen[238901]: 167 167
Jan 31 08:15:42 compute-0 systemd[1]: libpod-8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0.scope: Deactivated successfully.
Jan 31 08:15:42 compute-0 podman[238846]: 2026-01-31 08:15:42.329963439 +0000 UTC m=+0.874999771 container attach 8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jepsen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:42 compute-0 podman[238846]: 2026-01-31 08:15:42.331165013 +0000 UTC m=+0.876201315 container died 8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba8962b713c800c38c999e641e509afd572e6ba0fed6727f6aa778286da9dab9-merged.mount: Deactivated successfully.
Jan 31 08:15:42 compute-0 podman[238846]: 2026-01-31 08:15:42.675718212 +0000 UTC m=+1.220754514 container remove 8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:15:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:42 compute-0 podman[238938]: 2026-01-31 08:15:42.812877743 +0000 UTC m=+0.054875882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:42 compute-0 podman[238938]: 2026-01-31 08:15:42.958084407 +0000 UTC m=+0.200082526 container create 42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:15:43 compute-0 systemd[1]: Started libpod-conmon-42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624.scope.
Jan 31 08:15:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:43 compute-0 systemd[1]: libpod-conmon-8bda387a51e63e4611f049401517933ada6af27769f1b66abbb58a024841f0c0.scope: Deactivated successfully.
Jan 31 08:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841f4d1495dcce719ff64a824755e3b07771389329105c34d92b0991ac38712c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841f4d1495dcce719ff64a824755e3b07771389329105c34d92b0991ac38712c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841f4d1495dcce719ff64a824755e3b07771389329105c34d92b0991ac38712c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841f4d1495dcce719ff64a824755e3b07771389329105c34d92b0991ac38712c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841f4d1495dcce719ff64a824755e3b07771389329105c34d92b0991ac38712c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:43 compute-0 podman[238903]: 2026-01-31 08:15:43.153388331 +0000 UTC m=+1.140760508 container init 234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:15:43 compute-0 podman[238903]: 2026-01-31 08:15:43.159280344 +0000 UTC m=+1.146652501 container start 234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:15:43 compute-0 nova_compute[238954]: + sudo -E kolla_set_configs
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Validating config file
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying service configuration files
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:15:43 compute-0 podman[238903]: nova_compute
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Deleting /etc/ceph
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Creating directory /etc/ceph
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 08:15:43 compute-0 systemd[1]: Started nova_compute container.
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Writing out command to execute
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:43 compute-0 nova_compute[238954]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:15:43 compute-0 nova_compute[238954]: ++ cat /run_command
Jan 31 08:15:43 compute-0 nova_compute[238954]: + CMD=nova-compute
Jan 31 08:15:43 compute-0 nova_compute[238954]: + ARGS=
Jan 31 08:15:43 compute-0 nova_compute[238954]: + sudo kolla_copy_cacerts
Jan 31 08:15:43 compute-0 nova_compute[238954]: + [[ ! -n '' ]]
Jan 31 08:15:43 compute-0 nova_compute[238954]: + . kolla_extend_start
Jan 31 08:15:43 compute-0 nova_compute[238954]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 08:15:43 compute-0 nova_compute[238954]: Running command: 'nova-compute'
Jan 31 08:15:43 compute-0 nova_compute[238954]: + umask 0022
Jan 31 08:15:43 compute-0 nova_compute[238954]: + exec nova-compute
Jan 31 08:15:43 compute-0 sudo[238831]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:43 compute-0 podman[238938]: 2026-01-31 08:15:43.340444804 +0000 UTC m=+0.582442943 container init 42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:15:43 compute-0 podman[238938]: 2026-01-31 08:15:43.347163601 +0000 UTC m=+0.589161720 container start 42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:43 compute-0 ceph-mon[75294]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:15:43 compute-0 podman[238938]: 2026-01-31 08:15:43.507598247 +0000 UTC m=+0.749596366 container attach 42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:15:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:15:43 compute-0 vigorous_proskuriakova[238960]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:15:43 compute-0 vigorous_proskuriakova[238960]: --> All data devices are unavailable
Jan 31 08:15:43 compute-0 systemd[1]: libpod-42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624.scope: Deactivated successfully.
Jan 31 08:15:43 compute-0 podman[238938]: 2026-01-31 08:15:43.810870222 +0000 UTC m=+1.052868351 container died 42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:44 compute-0 python3.9[239149]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:15:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-841f4d1495dcce719ff64a824755e3b07771389329105c34d92b0991ac38712c-merged.mount: Deactivated successfully.
Jan 31 08:15:44 compute-0 python3.9[239300]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:15:45 compute-0 ceph-mon[75294]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:15:45 compute-0 python3.9[239451]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:15:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s
Jan 31 08:15:45 compute-0 podman[238938]: 2026-01-31 08:15:45.700543483 +0000 UTC m=+2.942541622 container remove 42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:15:45 compute-0 sudo[238716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:45 compute-0 sudo[239476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:45 compute-0 sudo[239476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:45 compute-0 sudo[239476]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:45 compute-0 systemd[1]: libpod-conmon-42b3ac5456f0eeaf0398936f0f23ba61880aea33b09c7ca6e59f117392582624.scope: Deactivated successfully.
Jan 31 08:15:45 compute-0 sudo[239505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:15:45 compute-0 sudo[239505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:46 compute-0 podman[239590]: 2026-01-31 08:15:46.033541972 +0000 UTC m=+0.019081030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:46 compute-0 podman[239590]: 2026-01-31 08:15:46.166416125 +0000 UTC m=+0.151955183 container create 1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:15:46 compute-0 sudo[239677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrnxgztjmsvbocmxbxmtxevzptoeqsia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847345.7983396-1290-147202591289170/AnsiballZ_podman_container.py'
Jan 31 08:15:46 compute-0 sudo[239677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:46 compute-0 systemd[1]: Started libpod-conmon-1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da.scope.
Jan 31 08:15:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:46 compute-0 podman[239590]: 2026-01-31 08:15:46.409767419 +0000 UTC m=+0.395306507 container init 1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:15:46 compute-0 podman[239590]: 2026-01-31 08:15:46.417669418 +0000 UTC m=+0.403208486 container start 1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:15:46 compute-0 intelligent_kare[239688]: 167 167
Jan 31 08:15:46 compute-0 systemd[1]: libpod-1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da.scope: Deactivated successfully.
Jan 31 08:15:46 compute-0 python3.9[239680]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 08:15:46 compute-0 podman[239590]: 2026-01-31 08:15:46.581918451 +0000 UTC m=+0.567457509 container attach 1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kare, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:46 compute-0 podman[239590]: 2026-01-31 08:15:46.58263846 +0000 UTC m=+0.568177518 container died 1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:15:46 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:15:46 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:15:46 compute-0 ceph-mon[75294]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s
Jan 31 08:15:46 compute-0 podman[239679]: 2026-01-31 08:15:46.851896702 +0000 UTC m=+0.604980028 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:15:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:15:46.956 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:15:46.957 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:15:46.957 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1901a41d3a9c273b2b1b5ae86b6ef50f9eca8f26303bc786f58e1289d6325f3d-merged.mount: Deactivated successfully.
Jan 31 08:15:47 compute-0 podman[239590]: 2026-01-31 08:15:47.549946708 +0000 UTC m=+1.535485766 container remove 1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:47 compute-0 nova_compute[238954]: 2026-01-31 08:15:47.571 238964 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:15:47 compute-0 nova_compute[238954]: 2026-01-31 08:15:47.572 238964 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:15:47 compute-0 nova_compute[238954]: 2026-01-31 08:15:47.573 238964 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:15:47 compute-0 nova_compute[238954]: 2026-01-31 08:15:47.573 238964 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 08:15:47 compute-0 systemd[1]: libpod-conmon-1c7a0b738bd7089d2ef8a4ed13cf1d09f72f0bdb35cd2f975977e0c1c8b567da.scope: Deactivated successfully.
Jan 31 08:15:47 compute-0 sudo[239677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s
Jan 31 08:15:47 compute-0 podman[239758]: 2026-01-31 08:15:47.652331786 +0000 UTC m=+0.019393369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:47 compute-0 nova_compute[238954]: 2026-01-31 08:15:47.865 238964 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:47 compute-0 nova_compute[238954]: 2026-01-31 08:15:47.923 238964 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:47 compute-0 nova_compute[238954]: 2026-01-31 08:15:47.924 238964 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:15:48 compute-0 podman[239758]: 2026-01-31 08:15:48.028833591 +0000 UTC m=+0.395895144 container create f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:15:48 compute-0 sudo[239921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebctihohyqcgllwxogxrraztbccpcdhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847347.772562-1298-61456321081909/AnsiballZ_systemd.py'
Jan 31 08:15:48 compute-0 sudo[239921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:48 compute-0 systemd[1]: Started libpod-conmon-f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091.scope.
Jan 31 08:15:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693e5290d04f8db79bfb43483d570641d64b9c977364647ce862b45d87142e08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693e5290d04f8db79bfb43483d570641d64b9c977364647ce862b45d87142e08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693e5290d04f8db79bfb43483d570641d64b9c977364647ce862b45d87142e08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693e5290d04f8db79bfb43483d570641d64b9c977364647ce862b45d87142e08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:48 compute-0 python3.9[239923]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:15:48 compute-0 systemd[1]: Stopping nova_compute container...
Jan 31 08:15:48 compute-0 podman[239758]: 2026-01-31 08:15:48.441193908 +0000 UTC m=+0.808255561 container init f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:15:48 compute-0 podman[239758]: 2026-01-31 08:15:48.451133115 +0000 UTC m=+0.818194688 container start f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:48 compute-0 podman[239758]: 2026-01-31 08:15:48.502567959 +0000 UTC m=+0.869629592 container attach f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:15:48 compute-0 systemd[1]: libpod-234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c.scope: Deactivated successfully.
Jan 31 08:15:48 compute-0 systemd[1]: libpod-234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c.scope: Consumed 2.406s CPU time.
Jan 31 08:15:48 compute-0 podman[239935]: 2026-01-31 08:15:48.5935296 +0000 UTC m=+0.162978467 container died 234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:15:48 compute-0 jolly_villani[239929]: {
Jan 31 08:15:48 compute-0 jolly_villani[239929]:     "0": [
Jan 31 08:15:48 compute-0 jolly_villani[239929]:         {
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "devices": [
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "/dev/loop3"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             ],
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_name": "ceph_lv0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_size": "21470642176",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "name": "ceph_lv0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "tags": {
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cluster_name": "ceph",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.crush_device_class": "",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.encrypted": "0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.objectstore": "bluestore",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osd_id": "0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.type": "block",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.vdo": "0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.with_tpm": "0"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             },
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "type": "block",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "vg_name": "ceph_vg0"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:         }
Jan 31 08:15:48 compute-0 jolly_villani[239929]:     ],
Jan 31 08:15:48 compute-0 jolly_villani[239929]:     "1": [
Jan 31 08:15:48 compute-0 jolly_villani[239929]:         {
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "devices": [
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "/dev/loop4"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             ],
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_name": "ceph_lv1",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_size": "21470642176",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "name": "ceph_lv1",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "tags": {
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cluster_name": "ceph",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.crush_device_class": "",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.encrypted": "0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.objectstore": "bluestore",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osd_id": "1",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.type": "block",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.vdo": "0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.with_tpm": "0"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             },
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "type": "block",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "vg_name": "ceph_vg1"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:         }
Jan 31 08:15:48 compute-0 jolly_villani[239929]:     ],
Jan 31 08:15:48 compute-0 jolly_villani[239929]:     "2": [
Jan 31 08:15:48 compute-0 jolly_villani[239929]:         {
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "devices": [
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "/dev/loop5"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             ],
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_name": "ceph_lv2",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_size": "21470642176",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "name": "ceph_lv2",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "tags": {
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.cluster_name": "ceph",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.crush_device_class": "",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.encrypted": "0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.objectstore": "bluestore",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osd_id": "2",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.type": "block",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.vdo": "0",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:                 "ceph.with_tpm": "0"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             },
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "type": "block",
Jan 31 08:15:48 compute-0 jolly_villani[239929]:             "vg_name": "ceph_vg2"
Jan 31 08:15:48 compute-0 jolly_villani[239929]:         }
Jan 31 08:15:48 compute-0 jolly_villani[239929]:     ]
Jan 31 08:15:48 compute-0 jolly_villani[239929]: }
Jan 31 08:15:48 compute-0 systemd[1]: libpod-f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091.scope: Deactivated successfully.
Jan 31 08:15:48 compute-0 podman[239758]: 2026-01-31 08:15:48.73893722 +0000 UTC m=+1.105998773 container died f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c-userdata-shm.mount: Deactivated successfully.
Jan 31 08:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4-merged.mount: Deactivated successfully.
Jan 31 08:15:49 compute-0 ceph-mon[75294]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s
Jan 31 08:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-693e5290d04f8db79bfb43483d570641d64b9c977364647ce862b45d87142e08-merged.mount: Deactivated successfully.
Jan 31 08:15:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:49 compute-0 podman[239758]: 2026-01-31 08:15:49.589547375 +0000 UTC m=+1.956608928 container remove f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:15:49 compute-0 systemd[1]: libpod-conmon-f19e64db886984874162973b87a4b981363adb0ce68ba3042ef733939b669091.scope: Deactivated successfully.
Jan 31 08:15:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:15:49 compute-0 sudo[239505]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:49 compute-0 sudo[239981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:49 compute-0 sudo[239981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:49 compute-0 sudo[239981]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:49 compute-0 podman[239935]: 2026-01-31 08:15:49.744753006 +0000 UTC m=+1.314201803 container cleanup 234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 31 08:15:49 compute-0 podman[239935]: nova_compute
Jan 31 08:15:49 compute-0 sudo[240006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:15:49 compute-0 sudo[240006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:49 compute-0 podman[240030]: nova_compute
Jan 31 08:15:49 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 08:15:49 compute-0 systemd[1]: Stopped nova_compute container.
Jan 31 08:15:49 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 08:15:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c316a6ca4fdc23d51cc752cec67879f8e0fbd70ed0c64bc50c25a54eaecfbe4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:50 compute-0 podman[240044]: 2026-01-31 08:15:50.068403696 +0000 UTC m=+0.252626513 container init 234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:50 compute-0 podman[240044]: 2026-01-31 08:15:50.076531001 +0000 UTC m=+0.260753788 container start 234f7c77cd60cf7fb48e54c02241de807d40ddb3d3ec71bf04372b186767710c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 31 08:15:50 compute-0 nova_compute[240062]: + sudo -E kolla_set_configs
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Validating config file
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying service configuration files
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /etc/ceph
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Creating directory /etc/ceph
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 08:15:50 compute-0 podman[240044]: nova_compute
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Writing out command to execute
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:50 compute-0 nova_compute[240062]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:15:50 compute-0 systemd[1]: Started nova_compute container.
Jan 31 08:15:50 compute-0 nova_compute[240062]: ++ cat /run_command
Jan 31 08:15:50 compute-0 nova_compute[240062]: + CMD=nova-compute
Jan 31 08:15:50 compute-0 nova_compute[240062]: + ARGS=
Jan 31 08:15:50 compute-0 nova_compute[240062]: + sudo kolla_copy_cacerts
Jan 31 08:15:50 compute-0 nova_compute[240062]: + [[ ! -n '' ]]
Jan 31 08:15:50 compute-0 nova_compute[240062]: + . kolla_extend_start
Jan 31 08:15:50 compute-0 nova_compute[240062]: Running command: 'nova-compute'
Jan 31 08:15:50 compute-0 nova_compute[240062]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 08:15:50 compute-0 nova_compute[240062]: + umask 0022
Jan 31 08:15:50 compute-0 nova_compute[240062]: + exec nova-compute
Jan 31 08:15:50 compute-0 sudo[239921]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:50 compute-0 podman[240076]: 2026-01-31 08:15:50.169091337 +0000 UTC m=+0.155855211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:50 compute-0 podman[240076]: 2026-01-31 08:15:50.388982541 +0000 UTC m=+0.375746345 container create db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:15:50 compute-0 systemd[1]: Started libpod-conmon-db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd.scope.
Jan 31 08:15:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:50 compute-0 sudo[240253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgborgdodsslvkjoghnswgvotjdjmcoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847350.360527-1307-80166695775123/AnsiballZ_podman_container.py'
Jan 31 08:15:50 compute-0 sudo[240253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:50 compute-0 podman[240076]: 2026-01-31 08:15:50.73099841 +0000 UTC m=+0.717762284 container init db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_moser, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:50 compute-0 podman[240076]: 2026-01-31 08:15:50.736689937 +0000 UTC m=+0.723453761 container start db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:15:50 compute-0 systemd[1]: libpod-db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd.scope: Deactivated successfully.
Jan 31 08:15:50 compute-0 elated_moser[240224]: 167 167
Jan 31 08:15:50 compute-0 conmon[240224]: conmon db9d12c303f5d597f400 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd.scope/container/memory.events
Jan 31 08:15:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:15:50
Jan 31 08:15:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:15:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:15:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Jan 31 08:15:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:15:50 compute-0 podman[240076]: 2026-01-31 08:15:50.879416373 +0000 UTC m=+0.866180187 container attach db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:15:50 compute-0 podman[240076]: 2026-01-31 08:15:50.881082919 +0000 UTC m=+0.867846753 container died db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_moser, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:15:50 compute-0 python3.9[240255]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 08:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec7c82c1b133164885f2432664d58c6563d270180ca89bddec2c90f5e8e992e4-merged.mount: Deactivated successfully.
Jan 31 08:15:51 compute-0 ceph-mon[75294]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:15:51 compute-0 podman[240076]: 2026-01-31 08:15:51.493684907 +0000 UTC m=+1.480448701 container remove db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:15:51 compute-0 systemd[1]: libpod-conmon-db9d12c303f5d597f4005cc3ce007f4184430ca7f90f6213be4a99391e6ff1bd.scope: Deactivated successfully.
Jan 31 08:15:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:51 compute-0 podman[240301]: 2026-01-31 08:15:51.689634858 +0000 UTC m=+0.109232428 container create c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_babbage, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:15:51 compute-0 podman[240301]: 2026-01-31 08:15:51.598137202 +0000 UTC m=+0.017734812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:51 compute-0 systemd[1]: Started libpod-conmon-d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f.scope.
Jan 31 08:15:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7151821b953c26c2b40db9b9c4fbe2ff055674ffc3e5b00b32a67c08fca76/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7151821b953c26c2b40db9b9c4fbe2ff055674ffc3e5b00b32a67c08fca76/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7151821b953c26c2b40db9b9c4fbe2ff055674ffc3e5b00b32a67c08fca76/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:51 compute-0 systemd[1]: Started libpod-conmon-c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf.scope.
Jan 31 08:15:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d52cea3f61d25d4a32aa476286f31894e605541e9ca62b76ef31578cc6481e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d52cea3f61d25d4a32aa476286f31894e605541e9ca62b76ef31578cc6481e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d52cea3f61d25d4a32aa476286f31894e605541e9ca62b76ef31578cc6481e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d52cea3f61d25d4a32aa476286f31894e605541e9ca62b76ef31578cc6481e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:51 compute-0 podman[240303]: 2026-01-31 08:15:51.959100936 +0000 UTC m=+0.375256051 container init d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:15:51 compute-0 podman[240303]: 2026-01-31 08:15:51.963768335 +0000 UTC m=+0.379923430 container start d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Jan 31 08:15:51 compute-0 nova_compute[240062]: 2026-01-31 08:15:51.984 240090 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:15:51 compute-0 nova_compute[240062]: 2026-01-31 08:15:51.986 240090 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:15:51 compute-0 nova_compute[240062]: 2026-01-31 08:15:51.986 240090 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:15:51 compute-0 nova_compute[240062]: 2026-01-31 08:15:51.986 240090 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 08:15:52 compute-0 python3.9[240255]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 08:15:52 compute-0 nova_compute_init[240345]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 08:15:52 compute-0 systemd[1]: libpod-d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f.scope: Deactivated successfully.
Jan 31 08:15:52 compute-0 podman[240301]: 2026-01-31 08:15:52.072870558 +0000 UTC m=+0.492468138 container init c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:15:52 compute-0 podman[240301]: 2026-01-31 08:15:52.07870929 +0000 UTC m=+0.498306870 container start c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:52 compute-0 podman[240301]: 2026-01-31 08:15:52.10033897 +0000 UTC m=+0.519936550 container attach c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:52 compute-0 podman[240356]: 2026-01-31 08:15:52.101724288 +0000 UTC m=+0.072398897 container died d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.133 240090 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.144 240090 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.145 240090 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f-userdata-shm.mount: Deactivated successfully.
Jan 31 08:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d7151821b953c26c2b40db9b9c4fbe2ff055674ffc3e5b00b32a67c08fca76-merged.mount: Deactivated successfully.
Jan 31 08:15:52 compute-0 podman[240356]: 2026-01-31 08:15:52.496425558 +0000 UTC m=+0.467100167 container cleanup d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Jan 31 08:15:52 compute-0 systemd[1]: libpod-conmon-d7035dc8ca808a03395d4a0bb447ebaa662bd8245a13d12e2cf783c79e0f7f3f.scope: Deactivated successfully.
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.567 240090 INFO nova.virt.driver [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 08:15:52 compute-0 sudo[240253]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:52 compute-0 ceph-mon[75294]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:52 compute-0 lvm[240485]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:15:52 compute-0 lvm[240485]: VG ceph_vg0 finished
Jan 31 08:15:52 compute-0 lvm[240488]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:15:52 compute-0 lvm[240488]: VG ceph_vg1 finished
Jan 31 08:15:52 compute-0 lvm[240490]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:15:52 compute-0 lvm[240490]: VG ceph_vg2 finished
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.837 240090 INFO nova.compute.provider_config [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.855 240090 DEBUG oslo_concurrency.lockutils [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.855 240090 DEBUG oslo_concurrency.lockutils [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.856 240090 DEBUG oslo_concurrency.lockutils [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.856 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.857 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.857 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.857 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.858 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.858 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.858 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.859 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.859 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.859 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.860 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.860 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.860 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.861 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.861 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.861 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.862 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.862 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.862 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.863 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.863 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.863 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.864 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.864 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.864 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.865 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.865 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.865 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.866 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.866 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 interesting_babbage[240338]: {}
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.867 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.868 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.868 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.868 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.868 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.869 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.869 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.869 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.869 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.869 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.869 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.870 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.870 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.870 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.870 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.870 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.870 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.871 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.871 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.871 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.871 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.871 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.871 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.872 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.872 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.872 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.872 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.872 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.872 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.873 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.873 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.873 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.873 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.873 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.873 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.873 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.874 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.874 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.874 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.874 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.874 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.874 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.875 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.875 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.875 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.875 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.875 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.875 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.876 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.876 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.876 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.876 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.876 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.877 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.877 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.877 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.877 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.877 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.877 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.878 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.878 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.878 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.878 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.878 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.878 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.879 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.879 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.879 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.879 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.879 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.879 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.880 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.880 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.880 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.880 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.880 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.880 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.881 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.881 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.881 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.881 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.881 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.881 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.881 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.882 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.882 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.882 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.882 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.882 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.882 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.882 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.883 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.883 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.883 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.883 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.883 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.883 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.883 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.884 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.884 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.884 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.884 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.884 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.884 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.885 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.885 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.885 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.885 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.885 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.885 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.885 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.886 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.886 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.886 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.886 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.886 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.886 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.886 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.887 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.887 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.887 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.887 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.887 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.888 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.888 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.888 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.888 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.888 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.888 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.889 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.889 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.889 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.889 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.889 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.889 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.890 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.890 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.890 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.890 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.890 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.891 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.891 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.891 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.891 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.891 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.891 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.892 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 systemd[1]: libpod-c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf.scope: Deactivated successfully.
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.892 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.892 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.892 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.892 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.892 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.893 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.893 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.893 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.893 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 conmon[240338]: conmon c5d0249d0dbc05908a64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf.scope/container/memory.events
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.893 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.893 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.894 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.894 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.894 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.894 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.894 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.894 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.894 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.895 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.895 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 podman[240301]: 2026-01-31 08:15:52.89452753 +0000 UTC m=+1.314125140 container died c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_babbage, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.895 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.895 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.895 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.895 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.896 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.896 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.896 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.896 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.896 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.896 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.897 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.897 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.897 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.897 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.897 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.897 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.897 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.898 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.898 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.898 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.898 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.898 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.898 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.898 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.899 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.899 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.899 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.899 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.899 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.899 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.899 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.900 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.900 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.900 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.900 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.900 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.900 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.901 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.901 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.901 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.901 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.901 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.901 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.901 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.902 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.902 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.902 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.902 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.902 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.902 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.903 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.903 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.903 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.903 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.903 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.904 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.904 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.904 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.904 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.904 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.904 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.905 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.905 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.905 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.905 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.905 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.905 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.906 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.906 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.906 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.906 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.906 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.906 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.907 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.907 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.907 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.907 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.907 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.907 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.908 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.908 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.908 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.908 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.908 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.909 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.909 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.909 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.909 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.909 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.909 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.909 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.910 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.910 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.910 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.910 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.910 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.910 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.911 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.911 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.911 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.911 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.911 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.911 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.911 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.912 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.912 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.912 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.912 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.912 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.912 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.912 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.913 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.913 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.913 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.913 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.913 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.913 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.914 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.914 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.914 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.914 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.914 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.914 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.914 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.915 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.915 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.915 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.915 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.915 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.915 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.916 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.916 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.916 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.916 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.916 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.916 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.917 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.917 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.917 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.917 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.917 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.917 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.918 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.918 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.918 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.918 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.918 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.918 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.918 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.919 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.919 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.919 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.919 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.919 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.919 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.919 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.920 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.920 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.920 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.920 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.920 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.921 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.921 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.921 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.921 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.921 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.921 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.922 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.922 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.922 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.922 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.922 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.922 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.923 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.923 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.923 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.923 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.923 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.924 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.924 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.924 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.924 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.924 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.924 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.925 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.925 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.925 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.925 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.925 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.926 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.926 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.926 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.926 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.926 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.926 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.927 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.927 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.927 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.927 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.927 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.928 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.928 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.928 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.928 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.928 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.928 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.929 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.929 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.929 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.929 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.929 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.929 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.930 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.930 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.930 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.930 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.930 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.930 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.931 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.931 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.931 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.931 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.931 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.931 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.931 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.932 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.933 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.933 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.933 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.933 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.933 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.933 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.934 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.934 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.934 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.934 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.934 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.934 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.934 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.935 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.935 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.935 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.935 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.935 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.935 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.935 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.936 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.936 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.936 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.936 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.937 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.937 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.937 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.937 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.938 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.938 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.938 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.938 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.938 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.938 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.939 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.939 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.939 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.939 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.939 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.939 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.940 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.940 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.940 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.940 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.940 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.941 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.941 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.941 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.941 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.941 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.941 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.942 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.942 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.942 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.942 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.942 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.942 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.943 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.943 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.943 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.943 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.943 240090 WARNING oslo_config.cfg [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 08:15:52 compute-0 nova_compute[240062]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 08:15:52 compute-0 nova_compute[240062]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 08:15:52 compute-0 nova_compute[240062]: and ``live_migration_inbound_addr`` respectively.
Jan 31 08:15:52 compute-0 nova_compute[240062]: ).  Its value may be silently ignored in the future.
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.944 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.944 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.944 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.944 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.944 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.944 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.945 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.945 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.945 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.945 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.945 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.945 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.946 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.946 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.946 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.946 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.946 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.946 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.947 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rbd_secret_uuid        = dc03f344-536f-5591-add9-31059f42637c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.947 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.947 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.947 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.947 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.947 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.948 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.948 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.948 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.948 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.948 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.948 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.949 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.949 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.949 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.949 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.949 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.949 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.950 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.950 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.950 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.950 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.950 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.950 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.951 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.951 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.951 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.951 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.951 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.951 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.951 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.952 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.952 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.952 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.952 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.952 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.952 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.953 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.953 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.953 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.953 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.953 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.953 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.953 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.954 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.954 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.954 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.954 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.954 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.954 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.954 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.955 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.955 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.955 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.955 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.955 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.956 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.956 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.956 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.956 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.956 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.956 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.956 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.957 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.957 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.957 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.957 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.957 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.957 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.958 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.958 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.958 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.958 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.958 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.958 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.958 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.959 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.959 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.959 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.959 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.959 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.959 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.959 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.960 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.960 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.960 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.960 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.960 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.960 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.960 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.961 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.961 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.961 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.961 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.961 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.961 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.961 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.962 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.962 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.962 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.962 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.962 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.962 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.962 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.963 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.963 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.963 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.963 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.963 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.963 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.964 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.964 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.964 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.964 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.964 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.964 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.964 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.965 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.965 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.965 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.965 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.965 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.966 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.966 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.966 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.966 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.966 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.966 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.966 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.967 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.967 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.967 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.967 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.967 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.967 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.967 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.968 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.968 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.968 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.968 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.968 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.968 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.968 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.969 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.969 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.969 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.969 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.969 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.969 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.969 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.970 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.970 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.970 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.970 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.970 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.970 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.970 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.971 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.971 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.971 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.971 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.971 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.971 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.971 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.972 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.972 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.972 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.972 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.972 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.972 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.972 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.973 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.973 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.973 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.973 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.973 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.973 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.973 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.974 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.974 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.974 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.974 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.974 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.974 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.975 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.976 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.976 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.976 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.976 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.976 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.976 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.976 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.977 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.977 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.977 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.977 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.977 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.977 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.977 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.978 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.979 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.979 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.979 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.979 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.979 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.979 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.979 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.980 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.980 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.980 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.980 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.980 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.980 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.980 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.981 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.981 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.981 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.981 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.981 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.981 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.982 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.982 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.982 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.982 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.982 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.982 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.983 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.984 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.984 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.984 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.984 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.984 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.984 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.984 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.985 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.985 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.985 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.985 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.985 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.985 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.985 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.986 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.986 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.986 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.986 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.986 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.986 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.986 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.987 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.987 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.987 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.987 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.987 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.987 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.987 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.988 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.988 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.988 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.988 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.988 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.988 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.988 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.989 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.989 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.989 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.989 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.989 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.989 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.989 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.990 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.991 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.991 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.991 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.991 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.991 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.991 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.991 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.992 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.992 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.992 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.992 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.992 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.992 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.992 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.993 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.993 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.993 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.993 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.993 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.993 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.993 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.994 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.994 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.994 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.994 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.994 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.994 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.994 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.995 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.995 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.995 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.995 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.995 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.995 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.995 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.996 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.997 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.998 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.998 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.998 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.998 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.998 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.998 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.998 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:52 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:52.999 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.000 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.000 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.000 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.000 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.000 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.000 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.000 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.001 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.002 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.002 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.002 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.002 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.002 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.002 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.002 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.003 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.003 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 sshd-session[215300]: Connection closed by 192.168.122.30 port 35682
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.003 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.003 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.003 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.003 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.003 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.004 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.005 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.005 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.005 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.005 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.005 240090 DEBUG oslo_service.service [None req-0eb475cb-d82e-4c6a-924a-6e949cce3b78 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.006 240090 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 08:15:53 compute-0 sshd-session[215297]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:15:53 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 08:15:53 compute-0 systemd[1]: session-50.scope: Consumed 1min 43.671s CPU time.
Jan 31 08:15:53 compute-0 systemd-logind[810]: Session 50 logged out. Waiting for processes to exit.
Jan 31 08:15:53 compute-0 systemd-logind[810]: Removed session 50.
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.071 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.072 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.072 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.072 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 08:15:53 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 08:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8d52cea3f61d25d4a32aa476286f31894e605541e9ca62b76ef31578cc6481e-merged.mount: Deactivated successfully.
Jan 31 08:15:53 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.122 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f47da996d60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.123 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f47da996d60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.124 240090 INFO nova.virt.libvirt.driver [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Connection event '1' reason 'None'
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.140 240090 WARNING nova.virt.libvirt.driver [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 08:15:53 compute-0 nova_compute[240062]: 2026-01-31 08:15:53.140 240090 DEBUG nova.virt.libvirt.volume.mount [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 08:15:53 compute-0 podman[240301]: 2026-01-31 08:15:53.273982306 +0000 UTC m=+1.693579886 container remove c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:15:53 compute-0 sudo[240006]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:15:53 compute-0 systemd[1]: libpod-conmon-c5d0249d0dbc05908a6430da3ac061fa71b85f1f60f026a0bbfc959665ef70bf.scope: Deactivated successfully.
Jan 31 08:15:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:15:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:15:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:15:53 compute-0 sudo[240556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:15:53 compute-0 sudo[240556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:53 compute-0 sudo[240556]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.115 240090 INFO nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]: 
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <host>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <uuid>9a8690d7-9804-4b35-b7c9-6b26f70c3d7e</uuid>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <arch>x86_64</arch>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model>EPYC-Rome-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <vendor>AMD</vendor>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <microcode version='16777317'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <signature family='23' model='49' stepping='0'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='x2apic'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='tsc-deadline'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='osxsave'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='hypervisor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='tsc_adjust'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='spec-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='stibp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='arch-capabilities'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='cmp_legacy'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='topoext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='virt-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='lbrv'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='tsc-scale'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='vmcb-clean'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='pause-filter'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='pfthreshold'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='svme-addr-chk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='rdctl-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='mds-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature name='pschange-mc-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <pages unit='KiB' size='4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <pages unit='KiB' size='2048'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <pages unit='KiB' size='1048576'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <power_management>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <suspend_mem/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </power_management>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <iommu support='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <migration_features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <live/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <uri_transports>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <uri_transport>tcp</uri_transport>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <uri_transport>rdma</uri_transport>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </uri_transports>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </migration_features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <topology>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <cells num='1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <cell id='0'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           <memory unit='KiB'>7864296</memory>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           <pages unit='KiB' size='4'>1966074</pages>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           <distances>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <sibling id='0' value='10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           </distances>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           <cpus num='8'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:           </cpus>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         </cell>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </cells>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </topology>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <cache>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </cache>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <secmodel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model>selinux</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <doi>0</doi>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </secmodel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <secmodel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model>dac</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <doi>0</doi>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </secmodel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </host>
Jan 31 08:15:54 compute-0 nova_compute[240062]: 
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <guest>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <os_type>hvm</os_type>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <arch name='i686'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <wordsize>32</wordsize>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <domain type='qemu'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <domain type='kvm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </arch>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <pae/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <nonpae/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <acpi default='on' toggle='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <apic default='on' toggle='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <cpuselection/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <deviceboot/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <disksnapshot default='on' toggle='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <externalSnapshot/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </guest>
Jan 31 08:15:54 compute-0 nova_compute[240062]: 
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <guest>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <os_type>hvm</os_type>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <arch name='x86_64'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <wordsize>64</wordsize>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <domain type='qemu'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <domain type='kvm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </arch>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <acpi default='on' toggle='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <apic default='on' toggle='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <cpuselection/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <deviceboot/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <disksnapshot default='on' toggle='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <externalSnapshot/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </guest>
Jan 31 08:15:54 compute-0 nova_compute[240062]: 
Jan 31 08:15:54 compute-0 nova_compute[240062]: </capabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]: 
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.121 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.146 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 08:15:54 compute-0 nova_compute[240062]: <domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <domain>kvm</domain>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <arch>i686</arch>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <vcpu max='240'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <iothreads supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <os supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='firmware'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <loader supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>rom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pflash</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='readonly'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>yes</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='secure'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </loader>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </os>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='maximum' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='maximumMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-model' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <vendor>AMD</vendor>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='x2apic'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='stibp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='succor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lbrv'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='custom' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Dhyana-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <memoryBacking supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='sourceType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>anonymous</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>memfd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </memoryBacking>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <disk supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='diskDevice'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>disk</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cdrom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>floppy</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>lun</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ide</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>fdc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>sata</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </disk>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <graphics supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vnc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egl-headless</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </graphics>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <video supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='modelType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vga</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cirrus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>none</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>bochs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ramfb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </video>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hostdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='mode'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>subsystem</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='startupPolicy'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>mandatory</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>requisite</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>optional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='subsysType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pci</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='capsType'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='pciBackend'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hostdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <rng supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>random</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </rng>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <filesystem supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='driverType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>path</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>handle</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtiofs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </filesystem>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tpm supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-tis</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-crb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emulator</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>external</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendVersion'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>2.0</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </tpm>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <redirdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </redirdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <channel supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </channel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <crypto supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </crypto>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <interface supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>passt</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </interface>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <panic supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>isa</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>hyperv</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </panic>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <console supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>null</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dev</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pipe</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stdio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>udp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tcp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu-vdagent</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </console>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <gic supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <vmcoreinfo supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <genid supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backingStoreInput supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backup supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <async-teardown supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <s390-pv supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <ps2 supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tdx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sev supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sgx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hyperv supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='features'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>relaxed</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vapic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>spinlocks</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vpindex</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>runtime</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>synic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stimer</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reset</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vendor_id</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>frequencies</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reenlightenment</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tlbflush</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ipi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>avic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emsr_bitmap</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>xmm_input</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <spinlocks>4095</spinlocks>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <stimer_direct>on</stimer_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hyperv>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <launchSecurity supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </features>
Jan 31 08:15:54 compute-0 nova_compute[240062]: </domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.157 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 08:15:54 compute-0 nova_compute[240062]: <domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <domain>kvm</domain>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <arch>i686</arch>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <vcpu max='4096'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <iothreads supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <os supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='firmware'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <loader supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>rom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pflash</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='readonly'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>yes</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='secure'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </loader>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </os>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='maximum' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='maximumMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-model' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <vendor>AMD</vendor>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='x2apic'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='stibp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='succor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lbrv'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='custom' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Dhyana-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <memoryBacking supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='sourceType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>anonymous</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>memfd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </memoryBacking>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <disk supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='diskDevice'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>disk</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cdrom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>floppy</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>lun</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>fdc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>sata</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </disk>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <graphics supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vnc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egl-headless</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </graphics>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <video supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='modelType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vga</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cirrus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>none</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>bochs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ramfb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </video>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hostdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='mode'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>subsystem</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='startupPolicy'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>mandatory</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>requisite</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>optional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='subsysType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pci</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='capsType'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='pciBackend'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hostdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <rng supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>random</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </rng>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <filesystem supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='driverType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>path</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>handle</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtiofs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </filesystem>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tpm supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-tis</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-crb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emulator</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>external</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendVersion'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>2.0</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </tpm>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <redirdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </redirdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <channel supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </channel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <crypto supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </crypto>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <interface supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>passt</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </interface>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <panic supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>isa</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>hyperv</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </panic>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <console supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>null</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dev</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pipe</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stdio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>udp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tcp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu-vdagent</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </console>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <gic supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <vmcoreinfo supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <genid supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backingStoreInput supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backup supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <async-teardown supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <s390-pv supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <ps2 supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tdx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sev supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sgx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hyperv supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='features'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>relaxed</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vapic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>spinlocks</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vpindex</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>runtime</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>synic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stimer</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reset</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vendor_id</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>frequencies</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reenlightenment</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tlbflush</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ipi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>avic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emsr_bitmap</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>xmm_input</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <spinlocks>4095</spinlocks>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <stimer_direct>on</stimer_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hyperv>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <launchSecurity supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </features>
Jan 31 08:15:54 compute-0 nova_compute[240062]: </domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.214 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.222 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 08:15:54 compute-0 nova_compute[240062]: <domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <domain>kvm</domain>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <arch>x86_64</arch>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <vcpu max='240'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <iothreads supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <os supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='firmware'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <loader supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>rom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pflash</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='readonly'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>yes</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='secure'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </loader>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </os>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='maximum' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='maximumMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-model' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <vendor>AMD</vendor>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='x2apic'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='stibp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='succor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lbrv'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='custom' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Dhyana-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <memoryBacking supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='sourceType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>anonymous</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>memfd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </memoryBacking>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <disk supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='diskDevice'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>disk</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cdrom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>floppy</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>lun</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ide</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>fdc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>sata</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </disk>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <graphics supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vnc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egl-headless</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </graphics>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <video supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='modelType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vga</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cirrus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>none</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>bochs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ramfb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </video>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hostdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='mode'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>subsystem</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='startupPolicy'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>mandatory</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>requisite</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>optional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='subsysType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pci</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='capsType'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='pciBackend'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hostdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <rng supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>random</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </rng>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <filesystem supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='driverType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>path</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>handle</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtiofs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </filesystem>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tpm supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-tis</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-crb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emulator</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>external</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendVersion'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>2.0</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </tpm>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <redirdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </redirdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <channel supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </channel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <crypto supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </crypto>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <interface supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>passt</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </interface>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <panic supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>isa</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>hyperv</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </panic>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <console supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>null</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dev</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pipe</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stdio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>udp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tcp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu-vdagent</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </console>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <gic supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <vmcoreinfo supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <genid supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backingStoreInput supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backup supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <async-teardown supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <s390-pv supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <ps2 supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tdx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sev supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sgx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hyperv supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='features'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>relaxed</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vapic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>spinlocks</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vpindex</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>runtime</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>synic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stimer</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reset</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vendor_id</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>frequencies</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reenlightenment</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tlbflush</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ipi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>avic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emsr_bitmap</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>xmm_input</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <spinlocks>4095</spinlocks>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <stimer_direct>on</stimer_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hyperv>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <launchSecurity supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </features>
Jan 31 08:15:54 compute-0 nova_compute[240062]: </domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.290 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 08:15:54 compute-0 nova_compute[240062]: <domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <domain>kvm</domain>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <arch>x86_64</arch>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <vcpu max='4096'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <iothreads supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <os supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='firmware'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>efi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <loader supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>rom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pflash</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='readonly'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>yes</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='secure'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>yes</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>no</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </loader>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </os>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='maximum' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='maximumMigratable'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>on</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>off</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='host-model' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <vendor>AMD</vendor>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='x2apic'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='stibp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='succor'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lbrv'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <mode name='custom' supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Broadwell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ddpd-u'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sha512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm3'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sm4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Cooperlake-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Denverton-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Dhyana-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amd-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='auto-ibrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibpb-brtype'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='no-nested-data-bp'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='null-sel-clr-base'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='perfmon-v2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbpb'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='stibp-always-on'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='EPYC-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-128'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-256'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx10-512'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='prefetchiti'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Haswell-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='IvyBridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='KnightsMill-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4fmaps'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-4vnniw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512er'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512pf'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fma4'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tbm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xop'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='amx-tile'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-bf16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-fp16'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bitalg'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vbmi2'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrc'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fzrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='la57'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='taa-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='tsx-ldtrk'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='SierraForest-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ifma'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-ne-convert'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx-vnni-int8'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bhi-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='bus-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cmpccxadd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fbsdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='fsrs'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ibrs-all'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='intel-psfd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ipred-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='lam'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mcdt-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pbrsb-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='psdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rrsba-ctrl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='serialize'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vaes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='vpclmulqdq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='hle'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='rtm'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512bw'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512cd'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512dq'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512f'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='avx512vl'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='invpcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pcid'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='pku'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='mpx'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v2'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v3'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='core-capability'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='split-lock-detect'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='Snowridge-v4'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='cldemote'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='erms'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='gfni'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdir64b'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='movdiri'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='xsaves'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='athlon-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='core2duo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='coreduo-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='n270-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='ss'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <blockers model='phenom-v1'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnow'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <feature name='3dnowext'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </blockers>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </mode>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </cpu>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <memoryBacking supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <enum name='sourceType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>anonymous</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <value>memfd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </memoryBacking>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <disk supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='diskDevice'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>disk</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cdrom</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>floppy</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>lun</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>fdc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>sata</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </disk>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <graphics supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vnc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egl-headless</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </graphics>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <video supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='modelType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vga</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>cirrus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>none</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>bochs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ramfb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </video>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hostdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='mode'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>subsystem</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='startupPolicy'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>mandatory</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>requisite</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>optional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='subsysType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pci</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>scsi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='capsType'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='pciBackend'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hostdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <rng supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtio-non-transitional</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>random</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>egd</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </rng>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <filesystem supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='driverType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>path</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>handle</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>virtiofs</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </filesystem>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tpm supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-tis</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tpm-crb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emulator</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>external</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendVersion'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>2.0</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </tpm>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <redirdev supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='bus'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>usb</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </redirdev>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <channel supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </channel>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <crypto supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendModel'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>builtin</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </crypto>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <interface supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='backendType'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>default</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>passt</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </interface>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <panic supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='model'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>isa</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>hyperv</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </panic>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <console supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='type'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>null</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vc</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pty</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dev</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>file</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>pipe</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stdio</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>udp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tcp</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>unix</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>qemu-vdagent</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>dbus</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </console>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </devices>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   <features>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <gic supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <vmcoreinfo supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <genid supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backingStoreInput supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <backup supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <async-teardown supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <s390-pv supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <ps2 supported='yes'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <tdx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sev supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <sgx supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <hyperv supported='yes'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <enum name='features'>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>relaxed</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vapic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>spinlocks</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vpindex</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>runtime</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>synic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>stimer</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reset</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>vendor_id</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>frequencies</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>reenlightenment</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>tlbflush</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>ipi</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>avic</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>emsr_bitmap</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <value>xmm_input</value>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </enum>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       <defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <spinlocks>4095</spinlocks>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <stimer_direct>on</stimer_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:15:54 compute-0 nova_compute[240062]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:15:54 compute-0 nova_compute[240062]:       </defaults>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     </hyperv>
Jan 31 08:15:54 compute-0 nova_compute[240062]:     <launchSecurity supported='no'/>
Jan 31 08:15:54 compute-0 nova_compute[240062]:   </features>
Jan 31 08:15:54 compute-0 nova_compute[240062]: </domainCapabilities>
Jan 31 08:15:54 compute-0 nova_compute[240062]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.370 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.370 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.371 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.375 240090 INFO nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Secure Boot support detected
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.377 240090 INFO nova.virt.libvirt.driver [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.377 240090 INFO nova.virt.libvirt.driver [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.390 240090 DEBUG nova.virt.libvirt.driver [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.427 240090 INFO nova.virt.node [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Determined node identity 4da0c29a-ac15-4049-acad-d0fd4b82723a from /var/lib/nova/compute_id
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.464 240090 WARNING nova.compute.manager [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Compute nodes ['4da0c29a-ac15-4049-acad-d0fd4b82723a'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.501 240090 INFO nova.compute.manager [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.538 240090 WARNING nova.compute.manager [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.539 240090 DEBUG oslo_concurrency.lockutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.539 240090 DEBUG oslo_concurrency.lockutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.539 240090 DEBUG oslo_concurrency.lockutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.540 240090 DEBUG nova.compute.resource_tracker [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:15:54 compute-0 nova_compute[240062]: 2026-01-31 08:15:54.541 240090 DEBUG oslo_concurrency.processutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:15:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:15:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:15:55 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434403545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:15:55 compute-0 nova_compute[240062]: 2026-01-31 08:15:55.131 240090 DEBUG oslo_concurrency.processutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:55 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 08:15:55 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 08:15:55 compute-0 nova_compute[240062]: 2026-01-31 08:15:55.476 240090 WARNING nova.virt.libvirt.driver [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:15:55 compute-0 nova_compute[240062]: 2026-01-31 08:15:55.479 240090 DEBUG nova.compute.resource_tracker [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5060MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:15:55 compute-0 nova_compute[240062]: 2026-01-31 08:15:55.479 240090 DEBUG oslo_concurrency.lockutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:55 compute-0 nova_compute[240062]: 2026-01-31 08:15:55.479 240090 DEBUG oslo_concurrency.lockutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:15:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:55 compute-0 ceph-mon[75294]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:55 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2434403545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:15:55 compute-0 nova_compute[240062]: 2026-01-31 08:15:55.794 240090 WARNING nova.compute.resource_tracker [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] No compute node record for compute-0.ctlplane.example.com:4da0c29a-ac15-4049-acad-d0fd4b82723a: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 4da0c29a-ac15-4049-acad-d0fd4b82723a could not be found.
Jan 31 08:15:56 compute-0 nova_compute[240062]: 2026-01-31 08:15:56.182 240090 INFO nova.compute.resource_tracker [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 4da0c29a-ac15-4049-acad-d0fd4b82723a
Jan 31 08:15:56 compute-0 nova_compute[240062]: 2026-01-31 08:15:56.275 240090 DEBUG nova.compute.resource_tracker [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:15:56 compute-0 nova_compute[240062]: 2026-01-31 08:15:56.276 240090 DEBUG nova.compute.resource_tracker [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:15:56 compute-0 ceph-mon[75294]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:57 compute-0 nova_compute[240062]: 2026-01-31 08:15:57.227 240090 INFO nova.scheduler.client.report [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] [req-eda159f1-5b59-4e41-afde-955c3833ccde] Created resource provider record via placement API for resource provider with UUID 4da0c29a-ac15-4049-acad-d0fd4b82723a and name compute-0.ctlplane.example.com.
Jan 31 08:15:57 compute-0 nova_compute[240062]: 2026-01-31 08:15:57.596 240090 DEBUG oslo_concurrency.processutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:15:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201367908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.092 240090 DEBUG oslo_concurrency.processutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.096 240090 DEBUG nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 08:15:58 compute-0 nova_compute[240062]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.097 240090 INFO nova.virt.libvirt.host [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] kernel doesn't support AMD SEV
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.097 240090 DEBUG nova.compute.provider_tree [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.098 240090 DEBUG nova.virt.libvirt.driver [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.163 240090 DEBUG nova.scheduler.client.report [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Updated inventory for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.164 240090 DEBUG nova.compute.provider_tree [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Updating resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.164 240090 DEBUG nova.compute.provider_tree [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.281 240090 DEBUG nova.compute.provider_tree [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Updating resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.330 240090 DEBUG nova.compute.resource_tracker [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.330 240090 DEBUG oslo_concurrency.lockutils [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.331 240090 DEBUG nova.service [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.422 240090 DEBUG nova.service [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 08:15:58 compute-0 nova_compute[240062]: 2026-01-31 08:15:58.422 240090 DEBUG nova.servicegroup.drivers.db [None req-a18e5138-8181-46fb-be0a-b19dff85d1df - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 08:15:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:59 compute-0 ceph-mon[75294]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1201367908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:15:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:01 compute-0 ceph-mon[75294]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:02 compute-0 ceph-mon[75294]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:05 compute-0 ceph-mon[75294]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:16:06 compute-0 ceph-mon[75294]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:08 compute-0 ceph-mon[75294]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:10 compute-0 ceph-mon[75294]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:12 compute-0 podman[240660]: 2026-01-31 08:16:12.189426624 +0000 UTC m=+0.061796403 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:16:13 compute-0 ceph-mon[75294]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:14 compute-0 ceph-mon[75294]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:17 compute-0 ceph-mon[75294]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:17 compute-0 podman[240681]: 2026-01-31 08:16:17.233642213 +0000 UTC m=+0.099605602 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:16:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:16:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3716767353' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:16:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:16:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3716767353' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:16:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:16:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1626988391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:16:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:16:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1626988391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:16:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808238841' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:16:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808238841' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:19 compute-0 ceph-mon[75294]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3716767353' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3716767353' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1626988391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1626988391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2808238841' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2808238841' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:16:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:20 compute-0 ceph-mon[75294]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:23 compute-0 ceph-mon[75294]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:16:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 3279 writes, 14K keys, 3279 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3278 writes, 3278 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1209 writes, 5329 keys, 1209 commit groups, 1.0 writes per commit group, ingest: 8.07 MB, 0.01 MB/s
                                           Interval WAL: 1208 writes, 1208 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.2      0.20              0.03         6    0.034       0      0       0.0       0.0
                                             L6      1/0    7.49 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.5    156.2    130.7      0.28              0.07         5    0.055     19K   2196       0.0       0.0
                                            Sum      1/0    7.49 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.5     89.8    105.4      0.48              0.10        11    0.044     19K   2196       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    114.3    117.0      0.24              0.05         6    0.040     12K   1449       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    156.2    130.7      0.28              0.07         5    0.055     19K   2196       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     97.6      0.15              0.03         5    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cc8bf858d0#2 capacity: 304.00 MB usage: 1.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(88,1.35 MB,0.445692%) FilterBlock(12,63.55 KB,0.0204136%) IndexBlock(12,130.41 KB,0.0418914%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:16:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:30 compute-0 ceph-mds[96942]: mds.beacon.cephfs.compute-0.xdvglw missed beacon ack from the monitors
Jan 31 08:16:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 1e+01 seconds
Jan 31 08:16:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:31 compute-0 ceph-mon[75294]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:31 compute-0 ceph-mon[75294]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:31 compute-0 ceph-mon[75294]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:31 compute-0 ceph-mon[75294]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:33 compute-0 ceph-mon[75294]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:35 compute-0 ceph-mon[75294]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.079901) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847396080195, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1404, "num_deletes": 506, "total_data_size": 1707742, "memory_usage": 1735312, "flush_reason": "Manual Compaction"}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847396172902, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1680330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13636, "largest_seqno": 15039, "table_properties": {"data_size": 1674149, "index_size": 2936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 15829, "raw_average_key_size": 18, "raw_value_size": 1659670, "raw_average_value_size": 1923, "num_data_blocks": 134, "num_entries": 863, "num_filter_entries": 863, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847247, "oldest_key_time": 1769847247, "file_creation_time": 1769847396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 92956 microseconds, and 3906 cpu microseconds.
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.172964) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1680330 bytes OK
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.172982) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.232211) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.232261) EVENT_LOG_v1 {"time_micros": 1769847396232251, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.232286) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1700360, prev total WAL file size 1700360, number of live WAL files 2.
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.232864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1640KB)], [32(7671KB)]
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847396232909, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9535858, "oldest_snapshot_seqno": -1}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3905 keys, 7683134 bytes, temperature: kUnknown
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847396603327, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7683134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7654830, "index_size": 17442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 95629, "raw_average_key_size": 24, "raw_value_size": 7581971, "raw_average_value_size": 1941, "num_data_blocks": 738, "num_entries": 3905, "num_filter_entries": 3905, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769847396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.603586) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7683134 bytes
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.891157) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.7 rd, 20.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.2) write-amplify(4.6) OK, records in: 4930, records dropped: 1025 output_compression: NoCompression
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.891222) EVENT_LOG_v1 {"time_micros": 1769847396891197, "job": 14, "event": "compaction_finished", "compaction_time_micros": 370478, "compaction_time_cpu_micros": 15905, "output_level": 6, "num_output_files": 1, "total_output_size": 7683134, "num_input_records": 4930, "num_output_records": 3905, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847396891881, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847396893138, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.232787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.893325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.893334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.893337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.893341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:36 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:16:36.893344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:37 compute-0 ceph-mon[75294]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:39 compute-0 ceph-mon[75294]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:40 compute-0 ceph-mon[75294]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:43 compute-0 ceph-mon[75294]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:43 compute-0 podman[240707]: 2026-01-31 08:16:43.179042734 +0000 UTC m=+0.050540842 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:16:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:45 compute-0 ceph-mon[75294]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:46 compute-0 nova_compute[240062]: 2026-01-31 08:16:46.424 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:46 compute-0 nova_compute[240062]: 2026-01-31 08:16:46.670 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:16:46.957 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:16:46.958 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:16:46.958 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:47 compute-0 ceph-mon[75294]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:48 compute-0 podman[240726]: 2026-01-31 08:16:48.182288316 +0000 UTC m=+0.059883737 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:16:49 compute-0 ceph-mon[75294]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:50 compute-0 ceph-mon[75294]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:16:50
Jan 31 08:16:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:16:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:16:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'volumes']
Jan 31 08:16:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:16:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.157 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.157 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:16:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 08:16:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/962710290' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:16:52 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14330 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:16:52 compute-0 ceph-mgr[75591]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:16:52 compute-0 ceph-mgr[75591]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.229 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.229 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.229 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.229 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.229 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.230 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.230 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.230 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.230 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.353 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.354 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.354 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.354 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.355 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:16:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786234876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:16:52 compute-0 nova_compute[240062]: 2026-01-31 08:16:52.859 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.012 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.013 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.013 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.014 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:53 compute-0 ceph-mon[75294]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:53 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/962710290' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:16:53 compute-0 ceph-mon[75294]: from='client.14330 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:16:53 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2786234876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.400 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.401 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.421 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:53 compute-0 sudo[240795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:53 compute-0 sudo[240795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:53 compute-0 sudo[240795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:53 compute-0 sudo[240820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:16:53 compute-0 sudo[240820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:16:53 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/157944757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.930 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:53 compute-0 nova_compute[240062]: 2026-01-31 08:16:53.936 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:16:54 compute-0 nova_compute[240062]: 2026-01-31 08:16:54.025 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:16:54 compute-0 sudo[240820]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:16:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:16:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:16:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:16:54 compute-0 nova_compute[240062]: 2026-01-31 08:16:54.271 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:16:54 compute-0 nova_compute[240062]: 2026-01-31 08:16:54.271 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:16:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:16:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:16:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:16:54 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:16:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:16:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:54 compute-0 sudo[240878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:54 compute-0 sudo[240878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:54 compute-0 sudo[240878]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:54 compute-0 sudo[240903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:16:54 compute-0 sudo[240903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:54 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/157944757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:16:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:16:55 compute-0 podman[240940]: 2026-01-31 08:16:55.042900747 +0000 UTC m=+0.024260794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:55 compute-0 podman[240940]: 2026-01-31 08:16:55.270444713 +0000 UTC m=+0.251804760 container create 2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:55 compute-0 systemd[1]: Started libpod-conmon-2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668.scope.
Jan 31 08:16:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:16:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:55 compute-0 podman[240940]: 2026-01-31 08:16:55.845104951 +0000 UTC m=+0.826465018 container init 2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jennings, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:16:55 compute-0 podman[240940]: 2026-01-31 08:16:55.855244398 +0000 UTC m=+0.836604435 container start 2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jennings, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:16:55 compute-0 goofy_jennings[240956]: 167 167
Jan 31 08:16:55 compute-0 systemd[1]: libpod-2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668.scope: Deactivated successfully.
Jan 31 08:16:56 compute-0 podman[240940]: 2026-01-31 08:16:56.181299075 +0000 UTC m=+1.162659122 container attach 2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:16:56 compute-0 podman[240940]: 2026-01-31 08:16:56.181628534 +0000 UTC m=+1.162988561 container died 2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:16:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:56 compute-0 ceph-mon[75294]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:56 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:16:56 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:16:56 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:16:56 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-28d5046012d6f090b0f774aad8fda6d327207613f4ce010fd5c98cf58f6f2674-merged.mount: Deactivated successfully.
Jan 31 08:16:57 compute-0 podman[240940]: 2026-01-31 08:16:57.320534276 +0000 UTC m=+2.301894323 container remove 2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jennings, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:16:57 compute-0 systemd[1]: libpod-conmon-2eb729644ef71f8e7ea3406195c700cbc6bb956082cc57c4295c48cb25571668.scope: Deactivated successfully.
Jan 31 08:16:57 compute-0 podman[240981]: 2026-01-31 08:16:57.426532942 +0000 UTC m=+0.024914422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:57 compute-0 podman[240981]: 2026-01-31 08:16:57.52275679 +0000 UTC m=+0.121138260 container create b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:16:57 compute-0 systemd[1]: Started libpod-conmon-b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b.scope.
Jan 31 08:16:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread fragmentation_score=0.000139 took=0.000033s
Jan 31 08:16:57 compute-0 ceph-osd[86929]: bluestore.MempoolThread fragmentation_score=0.000119 took=0.000016s
Jan 31 08:16:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:57 compute-0 ceph-mon[75294]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d590fda26d00b930fec8f725e6a6608ce7c10149cdfbd578178479a8d90cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d590fda26d00b930fec8f725e6a6608ce7c10149cdfbd578178479a8d90cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d590fda26d00b930fec8f725e6a6608ce7c10149cdfbd578178479a8d90cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d590fda26d00b930fec8f725e6a6608ce7c10149cdfbd578178479a8d90cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25d590fda26d00b930fec8f725e6a6608ce7c10149cdfbd578178479a8d90cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:57 compute-0 ceph-osd[85864]: bluestore.MempoolThread fragmentation_score=0.000117 took=0.000019s
Jan 31 08:16:57 compute-0 podman[240981]: 2026-01-31 08:16:57.663237648 +0000 UTC m=+0.261619118 container init b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:16:57 compute-0 podman[240981]: 2026-01-31 08:16:57.669086738 +0000 UTC m=+0.267468188 container start b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:16:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:57 compute-0 podman[240981]: 2026-01-31 08:16:57.698271795 +0000 UTC m=+0.296653245 container attach b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:16:58 compute-0 zen_roentgen[240997]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:16:58 compute-0 zen_roentgen[240997]: --> All data devices are unavailable
Jan 31 08:16:58 compute-0 systemd[1]: libpod-b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b.scope: Deactivated successfully.
Jan 31 08:16:58 compute-0 podman[240981]: 2026-01-31 08:16:58.049445907 +0000 UTC m=+0.647827357 container died b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e25d590fda26d00b930fec8f725e6a6608ce7c10149cdfbd578178479a8d90cd-merged.mount: Deactivated successfully.
Jan 31 08:16:58 compute-0 podman[240981]: 2026-01-31 08:16:58.408855585 +0000 UTC m=+1.007237035 container remove b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:16:58 compute-0 systemd[1]: libpod-conmon-b4172b51a686792ad1a20e1a5c0f2639d80f6949ea51738dd6699d451345491b.scope: Deactivated successfully.
Jan 31 08:16:58 compute-0 sudo[240903]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:58 compute-0 sudo[241031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:58 compute-0 sudo[241031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:58 compute-0 sudo[241031]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:58 compute-0 sudo[241056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:16:58 compute-0 sudo[241056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:58 compute-0 ceph-mon[75294]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:58 compute-0 podman[241093]: 2026-01-31 08:16:58.799343112 +0000 UTC m=+0.025592430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:58 compute-0 podman[241093]: 2026-01-31 08:16:58.906455889 +0000 UTC m=+0.132705187 container create d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rubin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:16:58 compute-0 systemd[1]: Started libpod-conmon-d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573.scope.
Jan 31 08:16:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:59 compute-0 podman[241093]: 2026-01-31 08:16:59.211911543 +0000 UTC m=+0.438160861 container init d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rubin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:16:59 compute-0 podman[241093]: 2026-01-31 08:16:59.216154479 +0000 UTC m=+0.442403777 container start d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:16:59 compute-0 objective_rubin[241109]: 167 167
Jan 31 08:16:59 compute-0 systemd[1]: libpod-d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573.scope: Deactivated successfully.
Jan 31 08:16:59 compute-0 conmon[241109]: conmon d951f1a392934611c0f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573.scope/container/memory.events
Jan 31 08:16:59 compute-0 podman[241093]: 2026-01-31 08:16:59.393206976 +0000 UTC m=+0.619456274 container attach d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:16:59 compute-0 podman[241093]: 2026-01-31 08:16:59.393609177 +0000 UTC m=+0.619858475 container died d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:16:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-231fabdd33a8289bed13268c1f484dcb9ac908250d194757a4469d0fb9a48172-merged.mount: Deactivated successfully.
Jan 31 08:17:00 compute-0 podman[241093]: 2026-01-31 08:17:00.276440102 +0000 UTC m=+1.502689410 container remove d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:17:00 compute-0 systemd[1]: libpod-conmon-d951f1a392934611c0f3670ddcee5891c11e99fd8e44d3a07d7fe899114fd573.scope: Deactivated successfully.
Jan 31 08:17:00 compute-0 podman[241133]: 2026-01-31 08:17:00.405278081 +0000 UTC m=+0.023311938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:00 compute-0 podman[241133]: 2026-01-31 08:17:00.522151025 +0000 UTC m=+0.140184882 container create ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:17:00 compute-0 systemd[1]: Started libpod-conmon-ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637.scope.
Jan 31 08:17:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e49f7531351fdc8655fd816cc5250d8b0de423942da6c078418c3ad1b520b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e49f7531351fdc8655fd816cc5250d8b0de423942da6c078418c3ad1b520b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e49f7531351fdc8655fd816cc5250d8b0de423942da6c078418c3ad1b520b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e49f7531351fdc8655fd816cc5250d8b0de423942da6c078418c3ad1b520b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:00 compute-0 podman[241133]: 2026-01-31 08:17:00.863768067 +0000 UTC m=+0.481801924 container init ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:00 compute-0 podman[241133]: 2026-01-31 08:17:00.869070151 +0000 UTC m=+0.487103988 container start ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:17:00 compute-0 podman[241133]: 2026-01-31 08:17:00.941629673 +0000 UTC m=+0.559663520 container attach ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]: {
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:     "0": [
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:         {
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "devices": [
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "/dev/loop3"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             ],
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_name": "ceph_lv0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_size": "21470642176",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "name": "ceph_lv0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "tags": {
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cluster_name": "ceph",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.crush_device_class": "",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.encrypted": "0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.objectstore": "bluestore",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osd_id": "0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.type": "block",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.vdo": "0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.with_tpm": "0"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             },
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "type": "block",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "vg_name": "ceph_vg0"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:         }
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:     ],
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:     "1": [
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:         {
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "devices": [
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "/dev/loop4"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             ],
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_name": "ceph_lv1",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_size": "21470642176",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "name": "ceph_lv1",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "tags": {
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cluster_name": "ceph",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.crush_device_class": "",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.encrypted": "0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.objectstore": "bluestore",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osd_id": "1",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.type": "block",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.vdo": "0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.with_tpm": "0"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             },
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "type": "block",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "vg_name": "ceph_vg1"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:         }
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:     ],
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:     "2": [
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:         {
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "devices": [
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "/dev/loop5"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             ],
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_name": "ceph_lv2",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_size": "21470642176",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "name": "ceph_lv2",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "tags": {
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.cluster_name": "ceph",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.crush_device_class": "",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.encrypted": "0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.objectstore": "bluestore",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osd_id": "2",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.type": "block",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.vdo": "0",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:                 "ceph.with_tpm": "0"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             },
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "type": "block",
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:             "vg_name": "ceph_vg2"
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:         }
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]:     ]
Jan 31 08:17:01 compute-0 vibrant_neumann[241149]: }
Jan 31 08:17:01 compute-0 ceph-mon[75294]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:01 compute-0 systemd[1]: libpod-ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637.scope: Deactivated successfully.
Jan 31 08:17:01 compute-0 podman[241133]: 2026-01-31 08:17:01.149680156 +0000 UTC m=+0.767713993 container died ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-de8e49f7531351fdc8655fd816cc5250d8b0de423942da6c078418c3ad1b520b-merged.mount: Deactivated successfully.
Jan 31 08:17:01 compute-0 podman[241133]: 2026-01-31 08:17:01.36579647 +0000 UTC m=+0.983830347 container remove ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_neumann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:17:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:01 compute-0 sudo[241056]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:01 compute-0 systemd[1]: libpod-conmon-ed9f87de7249368b1aadac18d24711b14486a13a83adadbd400ef444dc194637.scope: Deactivated successfully.
Jan 31 08:17:01 compute-0 sudo[241169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:01 compute-0 sudo[241169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:01 compute-0 sudo[241169]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:01 compute-0 sudo[241194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:17:01 compute-0 sudo[241194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:01 compute-0 podman[241232]: 2026-01-31 08:17:01.74612924 +0000 UTC m=+0.020659856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:01 compute-0 podman[241232]: 2026-01-31 08:17:01.806812937 +0000 UTC m=+0.081343533 container create 46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 08:17:01 compute-0 systemd[1]: Started libpod-conmon-46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615.scope.
Jan 31 08:17:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:02 compute-0 podman[241232]: 2026-01-31 08:17:02.04741479 +0000 UTC m=+0.321945396 container init 46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:17:02 compute-0 podman[241232]: 2026-01-31 08:17:02.05182327 +0000 UTC m=+0.326353856 container start 46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:02 compute-0 systemd[1]: libpod-46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615.scope: Deactivated successfully.
Jan 31 08:17:02 compute-0 elastic_meninsky[241249]: 167 167
Jan 31 08:17:02 compute-0 conmon[241249]: conmon 46620d2bc53b8d98b2b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615.scope/container/memory.events
Jan 31 08:17:02 compute-0 podman[241232]: 2026-01-31 08:17:02.08112019 +0000 UTC m=+0.355650796 container attach 46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:17:02 compute-0 podman[241232]: 2026-01-31 08:17:02.081416618 +0000 UTC m=+0.355947204 container died 46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e71e93d8f29d00520d4909b0936a96a9ed835bf3f0bc2e049fc6eec5d81773-merged.mount: Deactivated successfully.
Jan 31 08:17:02 compute-0 podman[241232]: 2026-01-31 08:17:02.456312039 +0000 UTC m=+0.730842625 container remove 46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:02 compute-0 systemd[1]: libpod-conmon-46620d2bc53b8d98b2b7040110ee90e0c0cc3b46df46e3b6f925ea5588b96615.scope: Deactivated successfully.
Jan 31 08:17:02 compute-0 podman[241272]: 2026-01-31 08:17:02.585400075 +0000 UTC m=+0.032392516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:02 compute-0 podman[241272]: 2026-01-31 08:17:02.763429879 +0000 UTC m=+0.210422230 container create 826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bardeen, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:17:02 compute-0 systemd[1]: Started libpod-conmon-826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b.scope.
Jan 31 08:17:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8231f903e1a684e885df8a39a3011febedce9763ec679d14579a9f582c79c495/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8231f903e1a684e885df8a39a3011febedce9763ec679d14579a9f582c79c495/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8231f903e1a684e885df8a39a3011febedce9763ec679d14579a9f582c79c495/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8231f903e1a684e885df8a39a3011febedce9763ec679d14579a9f582c79c495/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:02 compute-0 podman[241272]: 2026-01-31 08:17:02.991161 +0000 UTC m=+0.438153371 container init 826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:17:03 compute-0 podman[241272]: 2026-01-31 08:17:03.036758364 +0000 UTC m=+0.483750765 container start 826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bardeen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:17:03 compute-0 podman[241272]: 2026-01-31 08:17:03.389367017 +0000 UTC m=+0.836359388 container attach 826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:17:03 compute-0 ceph-mon[75294]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:03 compute-0 lvm[241368]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:17:03 compute-0 lvm[241367]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:17:03 compute-0 lvm[241368]: VG ceph_vg0 finished
Jan 31 08:17:03 compute-0 lvm[241367]: VG ceph_vg1 finished
Jan 31 08:17:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:03 compute-0 lvm[241370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:17:03 compute-0 lvm[241370]: VG ceph_vg2 finished
Jan 31 08:17:03 compute-0 reverent_bardeen[241289]: {}
Jan 31 08:17:03 compute-0 systemd[1]: libpod-826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b.scope: Deactivated successfully.
Jan 31 08:17:03 compute-0 podman[241272]: 2026-01-31 08:17:03.805013331 +0000 UTC m=+1.252005682 container died 826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:17:03 compute-0 systemd[1]: libpod-826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b.scope: Consumed 1.146s CPU time.
Jan 31 08:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8231f903e1a684e885df8a39a3011febedce9763ec679d14579a9f582c79c495-merged.mount: Deactivated successfully.
Jan 31 08:17:04 compute-0 podman[241272]: 2026-01-31 08:17:04.77802035 +0000 UTC m=+2.225012731 container remove 826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bardeen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:17:04 compute-0 sudo[241194]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:17:04 compute-0 systemd[1]: libpod-conmon-826c572e600a8703f7c21c0316599abbd20ad3cba1086fbcba3e71d52b771d9b.scope: Deactivated successfully.
Jan 31 08:17:04 compute-0 ceph-mon[75294]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:17:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:17:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:17:05 compute-0 sudo[241386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:17:05 compute-0 sudo[241386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:05 compute-0 sudo[241386]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:17:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:17:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:07 compute-0 ceph-mon[75294]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:09 compute-0 ceph-mon[75294]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:10 compute-0 ceph-mon[75294]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:12 compute-0 ceph-mon[75294]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:14 compute-0 podman[241411]: 2026-01-31 08:17:14.178631615 +0000 UTC m=+0.051784265 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:17:14 compute-0 ceph-mon[75294]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 08:17:16 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/820849575' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:17:16 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:17:16 compute-0 ceph-mgr[75591]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:17:16 compute-0 ceph-mgr[75591]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:17:16 compute-0 ceph-mon[75294]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:16 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/820849575' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:17:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:18 compute-0 ceph-mon[75294]: from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:17:19 compute-0 podman[241431]: 2026-01-31 08:17:19.218519511 +0000 UTC m=+0.089417474 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:17:19 compute-0 ceph-mon[75294]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:21 compute-0 ceph-mon[75294]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:22 compute-0 ceph-mon[75294]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:25 compute-0 ceph-mon[75294]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:27 compute-0 ceph-mon[75294]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:29 compute-0 ceph-mon[75294]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:31 compute-0 ceph-mon[75294]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:32 compute-0 ceph-mon[75294]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:35 compute-0 ceph-mon[75294]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:37 compute-0 ceph-mon[75294]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:17:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2967555154' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:17:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:17:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2967555154' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:17:39 compute-0 ceph-mon[75294]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2967555154' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:17:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2967555154' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:17:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:40 compute-0 ceph-mon[75294]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:43 compute-0 ceph-mon[75294]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:45 compute-0 podman[241457]: 2026-01-31 08:17:45.205821822 +0000 UTC m=+0.072517072 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 08:17:45 compute-0 ceph-mon[75294]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:17:46.958 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:17:46.959 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:17:46.959 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:47 compute-0 ceph-mon[75294]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:48 compute-0 ceph-mon[75294]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:50 compute-0 podman[241477]: 2026-01-31 08:17:50.214567414 +0000 UTC m=+0.085999810 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 08:17:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:17:50
Jan 31 08:17:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:17:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:17:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images', '.rgw.root', 'default.rgw.meta']
Jan 31 08:17:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:17:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:51 compute-0 ceph-mon[75294]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:52 compute-0 ceph-mon[75294]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:53 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:17:53.909 155810 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:b9:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:58:2f:a4:b2:e1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:53 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:17:53.910 155810 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:17:53 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:17:53.911 155810 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41f56c18-6e96-48c3-b4a0-6aca47eb55b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.264 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.264 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.288 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.289 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.289 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.300 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.300 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.301 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.301 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.301 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.301 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.302 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.302 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.302 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.325 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.326 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.326 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.326 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.326 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:17:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/227437440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:17:54 compute-0 nova_compute[240062]: 2026-01-31 08:17:54.876 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:55 compute-0 ceph-mon[75294]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:55 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/227437440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.030 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.031 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.031 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.032 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.096 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.096 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.117 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:17:55 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4034511092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.635 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.638 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.654 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.655 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:17:55 compute-0 nova_compute[240062]: 2026-01-31 08:17:55.655 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:56 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4034511092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:17:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:57 compute-0 ceph-mon[75294]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:59 compute-0 ceph-mon[75294]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:00 compute-0 ceph-mon[75294]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:02 compute-0 ceph-mon[75294]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:05 compute-0 ceph-mon[75294]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:05 compute-0 sudo[241547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:05 compute-0 sudo[241547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:05 compute-0 sudo[241547]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:05 compute-0 sudo[241572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:18:05 compute-0 sudo[241572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:05 compute-0 sudo[241572]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:18:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:18:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:05 compute-0 sudo[241618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:05 compute-0 sudo[241618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:05 compute-0 sudo[241618]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:05 compute-0 sudo[241643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:18:05 compute-0 sudo[241643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:18:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:06 compute-0 sudo[241643]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:18:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:18:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:18:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:18:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:18:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:18:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:18:07 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:18:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:18:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:07 compute-0 sudo[241699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:07 compute-0 sudo[241699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:07 compute-0 sudo[241699]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:07 compute-0 sudo[241724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:18:07 compute-0 sudo[241724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:07 compute-0 ceph-mon[75294]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:18:07 compute-0 podman[241761]: 2026-01-31 08:18:07.509460397 +0000 UTC m=+0.019757540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:07 compute-0 podman[241761]: 2026-01-31 08:18:07.670066934 +0000 UTC m=+0.180364047 container create af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:07 compute-0 systemd[1]: Started libpod-conmon-af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486.scope.
Jan 31 08:18:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:08 compute-0 podman[241761]: 2026-01-31 08:18:08.036256508 +0000 UTC m=+0.546553651 container init af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:18:08 compute-0 podman[241761]: 2026-01-31 08:18:08.04181812 +0000 UTC m=+0.552115233 container start af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:08 compute-0 systemd[1]: libpod-af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486.scope: Deactivated successfully.
Jan 31 08:18:08 compute-0 vigorous_kalam[241778]: 167 167
Jan 31 08:18:08 compute-0 conmon[241778]: conmon af9963bdd4895b28d636 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486.scope/container/memory.events
Jan 31 08:18:08 compute-0 podman[241761]: 2026-01-31 08:18:08.110424004 +0000 UTC m=+0.620721117 container attach af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:08 compute-0 podman[241761]: 2026-01-31 08:18:08.110936938 +0000 UTC m=+0.621234071 container died af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-662b4ce96aaf16b358701b8c686893c4f98f59b55929df7f09a160487b61463c-merged.mount: Deactivated successfully.
Jan 31 08:18:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:18:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:18:08 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:09 compute-0 podman[241761]: 2026-01-31 08:18:09.638115506 +0000 UTC m=+2.148412619 container remove af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:18:09 compute-0 ceph-mon[75294]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:09 compute-0 systemd[1]: libpod-conmon-af9963bdd4895b28d6362dba1c06aab1864a1ae70193ba2539d973fefd594486.scope: Deactivated successfully.
Jan 31 08:18:09 compute-0 podman[241801]: 2026-01-31 08:18:09.754198027 +0000 UTC m=+0.024794819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:09 compute-0 podman[241801]: 2026-01-31 08:18:09.987004816 +0000 UTC m=+0.257601558 container create 3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:18:10 compute-0 systemd[1]: Started libpod-conmon-3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd.scope.
Jan 31 08:18:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126348bed457a6ff4e13d85c66346719876320bb501eb4839275649ab5c0ec3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126348bed457a6ff4e13d85c66346719876320bb501eb4839275649ab5c0ec3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126348bed457a6ff4e13d85c66346719876320bb501eb4839275649ab5c0ec3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126348bed457a6ff4e13d85c66346719876320bb501eb4839275649ab5c0ec3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126348bed457a6ff4e13d85c66346719876320bb501eb4839275649ab5c0ec3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:10 compute-0 podman[241801]: 2026-01-31 08:18:10.449464379 +0000 UTC m=+0.720061131 container init 3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:18:10 compute-0 podman[241801]: 2026-01-31 08:18:10.458924637 +0000 UTC m=+0.729521349 container start 3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:10 compute-0 podman[241801]: 2026-01-31 08:18:10.527970134 +0000 UTC m=+0.798566866 container attach 3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:18:10 compute-0 ceph-mon[75294]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:10 compute-0 romantic_tesla[241818]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:18:10 compute-0 romantic_tesla[241818]: --> All data devices are unavailable
Jan 31 08:18:10 compute-0 systemd[1]: libpod-3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd.scope: Deactivated successfully.
Jan 31 08:18:10 compute-0 podman[241801]: 2026-01-31 08:18:10.893833308 +0000 UTC m=+1.164430090 container died 3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-126348bed457a6ff4e13d85c66346719876320bb501eb4839275649ab5c0ec3f-merged.mount: Deactivated successfully.
Jan 31 08:18:11 compute-0 podman[241801]: 2026-01-31 08:18:11.329483519 +0000 UTC m=+1.600080241 container remove 3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:18:11 compute-0 systemd[1]: libpod-conmon-3ff335fd47334a9e251a82e87e52796041b1d46f3353a77551db9e0723857afd.scope: Deactivated successfully.
Jan 31 08:18:11 compute-0 sudo[241724]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:11 compute-0 sudo[241851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:11 compute-0 sudo[241851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:11 compute-0 sudo[241851]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:11 compute-0 sudo[241876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:18:11 compute-0 sudo[241876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:11 compute-0 podman[241913]: 2026-01-31 08:18:11.786499773 +0000 UTC m=+0.080930932 container create 761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:18:11 compute-0 podman[241913]: 2026-01-31 08:18:11.725573269 +0000 UTC m=+0.020004438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:11 compute-0 systemd[1]: Started libpod-conmon-761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b.scope.
Jan 31 08:18:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:11 compute-0 podman[241913]: 2026-01-31 08:18:11.930930119 +0000 UTC m=+0.225361318 container init 761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:11 compute-0 podman[241913]: 2026-01-31 08:18:11.935306888 +0000 UTC m=+0.229738047 container start 761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:18:11 compute-0 trusting_jang[241930]: 167 167
Jan 31 08:18:11 compute-0 systemd[1]: libpod-761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b.scope: Deactivated successfully.
Jan 31 08:18:12 compute-0 podman[241913]: 2026-01-31 08:18:12.040703207 +0000 UTC m=+0.335134416 container attach 761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:18:12 compute-0 podman[241913]: 2026-01-31 08:18:12.041355894 +0000 UTC m=+0.335787053 container died 761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:18:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec65d3d3d5570f459913671d7a9df9bda38fd7ff99ba3a9b5f824ee1edcd65b2-merged.mount: Deactivated successfully.
Jan 31 08:18:12 compute-0 podman[241913]: 2026-01-31 08:18:12.545772434 +0000 UTC m=+0.840203583 container remove 761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_jang, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:12 compute-0 systemd[1]: libpod-conmon-761eff3ef0171ddaf54d9ac8ec99f7170d03a7f06c7f2d96ada8eec24b000a4b.scope: Deactivated successfully.
Jan 31 08:18:12 compute-0 podman[241954]: 2026-01-31 08:18:12.677495552 +0000 UTC m=+0.018555708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:12 compute-0 podman[241954]: 2026-01-31 08:18:12.808016288 +0000 UTC m=+0.149076444 container create 9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jepsen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:18:12 compute-0 systemd[1]: Started libpod-conmon-9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776.scope.
Jan 31 08:18:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb4ea0e5cc87e4ca4fcaa91180e0dfc00e262dcaa1b22dc03900cf4711d4b61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb4ea0e5cc87e4ca4fcaa91180e0dfc00e262dcaa1b22dc03900cf4711d4b61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb4ea0e5cc87e4ca4fcaa91180e0dfc00e262dcaa1b22dc03900cf4711d4b61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb4ea0e5cc87e4ca4fcaa91180e0dfc00e262dcaa1b22dc03900cf4711d4b61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:13 compute-0 podman[241954]: 2026-01-31 08:18:13.118080657 +0000 UTC m=+0.459140843 container init 9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:18:13 compute-0 podman[241954]: 2026-01-31 08:18:13.125310075 +0000 UTC m=+0.466370221 container start 9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:13 compute-0 ceph-mon[75294]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:13 compute-0 podman[241954]: 2026-01-31 08:18:13.353273472 +0000 UTC m=+0.694333628 container attach 9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jepsen, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]: {
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:     "0": [
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:         {
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "devices": [
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "/dev/loop3"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             ],
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_name": "ceph_lv0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_size": "21470642176",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "name": "ceph_lv0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "tags": {
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cluster_name": "ceph",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.crush_device_class": "",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.encrypted": "0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.objectstore": "bluestore",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osd_id": "0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.type": "block",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.vdo": "0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.with_tpm": "0"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             },
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "type": "block",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "vg_name": "ceph_vg0"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:         }
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:     ],
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:     "1": [
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:         {
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "devices": [
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "/dev/loop4"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             ],
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_name": "ceph_lv1",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_size": "21470642176",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "name": "ceph_lv1",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "tags": {
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cluster_name": "ceph",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.crush_device_class": "",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.encrypted": "0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.objectstore": "bluestore",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osd_id": "1",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.type": "block",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.vdo": "0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.with_tpm": "0"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             },
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "type": "block",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "vg_name": "ceph_vg1"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:         }
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:     ],
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:     "2": [
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:         {
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "devices": [
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "/dev/loop5"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             ],
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_name": "ceph_lv2",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_size": "21470642176",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "name": "ceph_lv2",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "tags": {
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.cluster_name": "ceph",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.crush_device_class": "",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.encrypted": "0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.objectstore": "bluestore",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osd_id": "2",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.type": "block",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.vdo": "0",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:                 "ceph.with_tpm": "0"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             },
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "type": "block",
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:             "vg_name": "ceph_vg2"
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:         }
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]:     ]
Jan 31 08:18:13 compute-0 friendly_jepsen[241970]: }
Jan 31 08:18:13 compute-0 systemd[1]: libpod-9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776.scope: Deactivated successfully.
Jan 31 08:18:13 compute-0 podman[241954]: 2026-01-31 08:18:13.417224589 +0000 UTC m=+0.758284785 container died 9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jepsen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:18:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb4ea0e5cc87e4ca4fcaa91180e0dfc00e262dcaa1b22dc03900cf4711d4b61-merged.mount: Deactivated successfully.
Jan 31 08:18:14 compute-0 podman[241954]: 2026-01-31 08:18:14.036178637 +0000 UTC m=+1.377238793 container remove 9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jepsen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:14 compute-0 systemd[1]: libpod-conmon-9f905ea8a76dbdf6ea06b32d3f8162ef4f72c372918a4b4459d75f2f60af1776.scope: Deactivated successfully.
Jan 31 08:18:14 compute-0 sudo[241876]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:14 compute-0 sudo[241991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:14 compute-0 sudo[241991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:14 compute-0 sudo[241991]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:14 compute-0 sudo[242016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:18:14 compute-0 sudo[242016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:14 compute-0 podman[242053]: 2026-01-31 08:18:14.383475574 +0000 UTC m=+0.024665584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:14 compute-0 podman[242053]: 2026-01-31 08:18:14.545632874 +0000 UTC m=+0.186822864 container create 7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:18:14 compute-0 systemd[1]: Started libpod-conmon-7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b.scope.
Jan 31 08:18:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:14 compute-0 podman[242053]: 2026-01-31 08:18:14.840577651 +0000 UTC m=+0.481767661 container init 7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kalam, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:18:14 compute-0 podman[242053]: 2026-01-31 08:18:14.848876257 +0000 UTC m=+0.490066257 container start 7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:14 compute-0 loving_kalam[242070]: 167 167
Jan 31 08:18:14 compute-0 systemd[1]: libpod-7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b.scope: Deactivated successfully.
Jan 31 08:18:14 compute-0 podman[242053]: 2026-01-31 08:18:14.947327767 +0000 UTC m=+0.588517777 container attach 7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kalam, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:18:14 compute-0 podman[242053]: 2026-01-31 08:18:14.94783399 +0000 UTC m=+0.589023980 container died 7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kalam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c54c9567555cda8e0fff95a95f44e559e08105b596fa4085b137495b28f64e-merged.mount: Deactivated successfully.
Jan 31 08:18:15 compute-0 ceph-mon[75294]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:16 compute-0 podman[242053]: 2026-01-31 08:18:16.094156174 +0000 UTC m=+1.735346204 container remove 7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:16 compute-0 systemd[1]: libpod-conmon-7c9df6f9a2990ed815c2b59ee671ec45030954cc9851e61349ff6039b6caec3b.scope: Deactivated successfully.
Jan 31 08:18:16 compute-0 podman[242087]: 2026-01-31 08:18:16.172088634 +0000 UTC m=+0.918464211 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:18:16 compute-0 podman[242112]: 2026-01-31 08:18:16.205852306 +0000 UTC m=+0.022870837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:16 compute-0 podman[242112]: 2026-01-31 08:18:16.330084289 +0000 UTC m=+0.147102790 container create ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:18:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:16 compute-0 systemd[1]: Started libpod-conmon-ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f.scope.
Jan 31 08:18:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8902c08a1f3434aafae48e55289e4d1a5c98a841591e3090a114b77cd2dcf38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8902c08a1f3434aafae48e55289e4d1a5c98a841591e3090a114b77cd2dcf38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8902c08a1f3434aafae48e55289e4d1a5c98a841591e3090a114b77cd2dcf38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8902c08a1f3434aafae48e55289e4d1a5c98a841591e3090a114b77cd2dcf38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:16 compute-0 podman[242112]: 2026-01-31 08:18:16.623925026 +0000 UTC m=+0.440943547 container init ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_khorana, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:18:16 compute-0 podman[242112]: 2026-01-31 08:18:16.633181849 +0000 UTC m=+0.450200350 container start ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_khorana, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:16 compute-0 podman[242112]: 2026-01-31 08:18:16.778667053 +0000 UTC m=+0.595685574 container attach ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:18:16 compute-0 ceph-mon[75294]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:17 compute-0 lvm[242206]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:18:17 compute-0 lvm[242206]: VG ceph_vg0 finished
Jan 31 08:18:17 compute-0 lvm[242209]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:18:17 compute-0 lvm[242209]: VG ceph_vg1 finished
Jan 31 08:18:17 compute-0 lvm[242211]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:18:17 compute-0 lvm[242211]: VG ceph_vg2 finished
Jan 31 08:18:17 compute-0 quizzical_khorana[242129]: {}
Jan 31 08:18:17 compute-0 podman[242112]: 2026-01-31 08:18:17.336559563 +0000 UTC m=+1.153578064 container died ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_khorana, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:18:17 compute-0 systemd[1]: libpod-ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f.scope: Deactivated successfully.
Jan 31 08:18:17 compute-0 systemd[1]: libpod-ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f.scope: Consumed 1.030s CPU time.
Jan 31 08:18:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8902c08a1f3434aafae48e55289e4d1a5c98a841591e3090a114b77cd2dcf38-merged.mount: Deactivated successfully.
Jan 31 08:18:18 compute-0 podman[242112]: 2026-01-31 08:18:18.756010288 +0000 UTC m=+2.573028819 container remove ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:18 compute-0 sudo[242016]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:18:18 compute-0 systemd[1]: libpod-conmon-ef5c8df1902fde206f26d7e9a80f5f3076446b02e83c2e6c20d61477f7c5943f.scope: Deactivated successfully.
Jan 31 08:18:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:18:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:19 compute-0 sudo[242226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:18:19 compute-0 sudo[242226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:19 compute-0 sudo[242226]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:19 compute-0 ceph-mon[75294]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:18:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:20 compute-0 ceph-mon[75294]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:21 compute-0 podman[242251]: 2026-01-31 08:18:21.219907414 +0000 UTC m=+0.085759884 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:18:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5851 writes, 24K keys, 5851 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5851 writes, 997 syncs, 5.87 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:18:23 compute-0 ceph-mon[75294]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:24 compute-0 ceph-mon[75294]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:18:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7214 writes, 29K keys, 7214 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7214 writes, 1459 syncs, 4.94 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:18:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:27 compute-0 ceph-mon[75294]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:29 compute-0 ceph-mon[75294]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:18:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 5788 writes, 24K keys, 5788 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5788 writes, 912 syncs, 6.35 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:18:30 compute-0 ceph-mon[75294]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:32 compute-0 ceph-mon[75294]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:35 compute-0 ceph-mon[75294]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:37 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Check health
Jan 31 08:18:37 compute-0 ceph-mon[75294]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:38 compute-0 ceph-mon[75294]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3632636647' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3632636647' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3632636647' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3632636647' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:18:40 compute-0 ceph-mon[75294]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:42 compute-0 ceph-mon[75294]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:44 compute-0 sshd-session[242277]: Invalid user solv from 193.32.162.145 port 33638
Jan 31 08:18:44 compute-0 ceph-mon[75294]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:45 compute-0 sshd-session[242277]: Connection closed by invalid user solv 193.32.162.145 port 33638 [preauth]
Jan 31 08:18:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:18:46.960 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:18:46.960 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:18:46.960 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:47 compute-0 ceph-mon[75294]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:47 compute-0 podman[242279]: 2026-01-31 08:18:47.1764234 +0000 UTC m=+0.047428359 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:18:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:49 compute-0 ceph-mon[75294]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:18:50
Jan 31 08:18:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:18:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:18:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'backups']
Jan 31 08:18:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:18:51 compute-0 ceph-mon[75294]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:52 compute-0 podman[242298]: 2026-01-31 08:18:52.202588509 +0000 UTC m=+0.074063575 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 08:18:53 compute-0 ceph-mon[75294]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.657 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.657 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.657 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.657 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:18:55 compute-0 ceph-mon[75294]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.767 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.767 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.768 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.768 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.768 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.768 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.887 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.887 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.888 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.888 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:18:55 compute-0 nova_compute[240062]: 2026-01-31 08:18:55.888 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:18:56 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/903455517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:18:56 compute-0 nova_compute[240062]: 2026-01-31 08:18:56.404 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:56 compute-0 nova_compute[240062]: 2026-01-31 08:18:56.568 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:18:56 compute-0 nova_compute[240062]: 2026-01-31 08:18:56.570 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:18:56 compute-0 nova_compute[240062]: 2026-01-31 08:18:56.570 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:56 compute-0 nova_compute[240062]: 2026-01-31 08:18:56.571 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:56 compute-0 ceph-mon[75294]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:56 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/903455517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:18:57 compute-0 nova_compute[240062]: 2026-01-31 08:18:57.107 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:18:57 compute-0 nova_compute[240062]: 2026-01-31 08:18:57.107 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:18:57 compute-0 nova_compute[240062]: 2026-01-31 08:18:57.123 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:18:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4185665620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:18:57 compute-0 nova_compute[240062]: 2026-01-31 08:18:57.661 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:57 compute-0 nova_compute[240062]: 2026-01-31 08:18:57.666 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:18:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:57 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4185665620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:18:58 compute-0 nova_compute[240062]: 2026-01-31 08:18:58.198 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:18:58 compute-0 nova_compute[240062]: 2026-01-31 08:18:58.199 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:18:58 compute-0 nova_compute[240062]: 2026-01-31 08:18:58.199 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:58 compute-0 nova_compute[240062]: 2026-01-31 08:18:58.586 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:58 compute-0 nova_compute[240062]: 2026-01-31 08:18:58.587 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:58 compute-0 nova_compute[240062]: 2026-01-31 08:18:58.587 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:59 compute-0 ceph-mon[75294]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:01 compute-0 ceph-mon[75294]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:03 compute-0 ceph-mon[75294]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:04 compute-0 ceph-mon[75294]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:19:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:07 compute-0 ceph-mon[75294]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:09 compute-0 ceph-mon[75294]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:11 compute-0 ceph-mon[75294]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.687005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847552687062, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1485, "num_deletes": 251, "total_data_size": 2402205, "memory_usage": 2443568, "flush_reason": "Manual Compaction"}
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847552775897, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2357796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15040, "largest_seqno": 16524, "table_properties": {"data_size": 2350885, "index_size": 3982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14146, "raw_average_key_size": 19, "raw_value_size": 2337089, "raw_average_value_size": 3250, "num_data_blocks": 182, "num_entries": 719, "num_filter_entries": 719, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847396, "oldest_key_time": 1769847396, "file_creation_time": 1769847552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 88948 microseconds, and 4601 cpu microseconds.
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.775954) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2357796 bytes OK
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.775976) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.814162) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.814221) EVENT_LOG_v1 {"time_micros": 1769847552814212, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.814248) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2395687, prev total WAL file size 2396842, number of live WAL files 2.
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.815083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2302KB)], [35(7503KB)]
Jan 31 08:19:12 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847552815121, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10040930, "oldest_snapshot_seqno": -1}
Jan 31 08:19:12 compute-0 ceph-mon[75294]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4110 keys, 8279810 bytes, temperature: kUnknown
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847553034131, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8279810, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8249665, "index_size": 18771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100348, "raw_average_key_size": 24, "raw_value_size": 8172724, "raw_average_value_size": 1988, "num_data_blocks": 793, "num_entries": 4110, "num_filter_entries": 4110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769847552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.034388) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8279810 bytes
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.048719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.8 rd, 37.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 4624, records dropped: 514 output_compression: NoCompression
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.048761) EVENT_LOG_v1 {"time_micros": 1769847553048744, "job": 16, "event": "compaction_finished", "compaction_time_micros": 219094, "compaction_time_cpu_micros": 13899, "output_level": 6, "num_output_files": 1, "total_output_size": 8279810, "num_input_records": 4624, "num_output_records": 4110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847553049229, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847553050139, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:12.814981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.050321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.050328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.050330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.050333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:19:13.050336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:14 compute-0 ceph-mon[75294]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:17 compute-0 ceph-mon[75294]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:18 compute-0 podman[242369]: 2026-01-31 08:19:18.212623452 +0000 UTC m=+0.081232102 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 08:19:19 compute-0 sudo[242387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:19 compute-0 sudo[242387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:19 compute-0 sudo[242387]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:19 compute-0 ceph-mon[75294]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:19 compute-0 sudo[242412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:19:19 compute-0 sudo[242412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:19 compute-0 sudo[242412]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:19:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:19:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:19:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:19:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:19:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:19:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:19:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:19:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:19:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:19:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:19:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:19:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:19 compute-0 sudo[242468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:19 compute-0 sudo[242468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:19 compute-0 sudo[242468]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:20 compute-0 sudo[242493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:19:20 compute-0 sudo[242493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:20 compute-0 podman[242530]: 2026-01-31 08:19:20.336502042 +0000 UTC m=+0.081110519 container create 84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_carson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:19:20 compute-0 podman[242530]: 2026-01-31 08:19:20.275897799 +0000 UTC m=+0.020506306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:20 compute-0 systemd[1]: Started libpod-conmon-84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b.scope.
Jan 31 08:19:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:20 compute-0 podman[242530]: 2026-01-31 08:19:20.624021844 +0000 UTC m=+0.368630331 container init 84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_carson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:19:20 compute-0 podman[242530]: 2026-01-31 08:19:20.63151099 +0000 UTC m=+0.376119507 container start 84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:19:20 compute-0 lucid_carson[242546]: 167 167
Jan 31 08:19:20 compute-0 systemd[1]: libpod-84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b.scope: Deactivated successfully.
Jan 31 08:19:20 compute-0 conmon[242546]: conmon 84437e201f2892c3aff5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b.scope/container/memory.events
Jan 31 08:19:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:19:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:19:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:19:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:19:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:19:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:20 compute-0 podman[242530]: 2026-01-31 08:19:20.8003805 +0000 UTC m=+0.544988997 container attach 84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:19:20 compute-0 podman[242530]: 2026-01-31 08:19:20.801378848 +0000 UTC m=+0.545987325 container died 84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-849cf9d53188dccf8ba80930c881a0e706a43b978f43cb928e9efb6fbc4c9a41-merged.mount: Deactivated successfully.
Jan 31 08:19:20 compute-0 podman[242530]: 2026-01-31 08:19:20.926258163 +0000 UTC m=+0.670866650 container remove 84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_carson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:19:21 compute-0 systemd[1]: libpod-conmon-84437e201f2892c3aff5d2f4eb781072388155eecabc4438519f3938a642bc1b.scope: Deactivated successfully.
Jan 31 08:19:21 compute-0 podman[242570]: 2026-01-31 08:19:21.072560189 +0000 UTC m=+0.046369400 container create 9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:19:21 compute-0 systemd[1]: Started libpod-conmon-9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977.scope.
Jan 31 08:19:21 compute-0 podman[242570]: 2026-01-31 08:19:21.045955845 +0000 UTC m=+0.019765056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd191fcae93b75f60d1c7b43e274a38aee2941fa360ab2dc487fe97cfcaf2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd191fcae93b75f60d1c7b43e274a38aee2941fa360ab2dc487fe97cfcaf2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd191fcae93b75f60d1c7b43e274a38aee2941fa360ab2dc487fe97cfcaf2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd191fcae93b75f60d1c7b43e274a38aee2941fa360ab2dc487fe97cfcaf2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cd191fcae93b75f60d1c7b43e274a38aee2941fa360ab2dc487fe97cfcaf2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:21 compute-0 podman[242570]: 2026-01-31 08:19:21.183029627 +0000 UTC m=+0.156838868 container init 9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:19:21 compute-0 podman[242570]: 2026-01-31 08:19:21.188446417 +0000 UTC m=+0.162255628 container start 9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:19:21 compute-0 podman[242570]: 2026-01-31 08:19:21.203954624 +0000 UTC m=+0.177763865 container attach 9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:19:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:21 compute-0 goofy_panini[242587]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:19:21 compute-0 goofy_panini[242587]: --> All data devices are unavailable
Jan 31 08:19:21 compute-0 systemd[1]: libpod-9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977.scope: Deactivated successfully.
Jan 31 08:19:21 compute-0 podman[242570]: 2026-01-31 08:19:21.585843481 +0000 UTC m=+0.559652692 container died 9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:19:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-02cd191fcae93b75f60d1c7b43e274a38aee2941fa360ab2dc487fe97cfcaf2e-merged.mount: Deactivated successfully.
Jan 31 08:19:21 compute-0 podman[242570]: 2026-01-31 08:19:21.727201012 +0000 UTC m=+0.701010223 container remove 9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:21 compute-0 systemd[1]: libpod-conmon-9d02d39c772bd5d1f147a025cc4d374ee25b09fb8d62c01af990fcbbf2089977.scope: Deactivated successfully.
Jan 31 08:19:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:21 compute-0 sudo[242493]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:21 compute-0 ceph-mon[75294]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:21 compute-0 sudo[242620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:21 compute-0 sudo[242620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:21 compute-0 sudo[242620]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:21 compute-0 sudo[242645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:19:21 compute-0 sudo[242645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:22 compute-0 podman[242681]: 2026-01-31 08:19:22.223860395 +0000 UTC m=+0.109663777 container create b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:19:22 compute-0 podman[242681]: 2026-01-31 08:19:22.134931681 +0000 UTC m=+0.020735083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:22 compute-0 systemd[1]: Started libpod-conmon-b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc.scope.
Jan 31 08:19:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:22 compute-0 podman[242681]: 2026-01-31 08:19:22.32332742 +0000 UTC m=+0.209130822 container init b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:22 compute-0 podman[242681]: 2026-01-31 08:19:22.329461109 +0000 UTC m=+0.215264491 container start b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:19:22 compute-0 serene_cerf[242710]: 167 167
Jan 31 08:19:22 compute-0 systemd[1]: libpod-b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc.scope: Deactivated successfully.
Jan 31 08:19:22 compute-0 podman[242681]: 2026-01-31 08:19:22.374148582 +0000 UTC m=+0.259951964 container attach b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:19:22 compute-0 podman[242681]: 2026-01-31 08:19:22.375475478 +0000 UTC m=+0.261278860 container died b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc4175b8cc2ecc966dd0e6cffa026898f3f0d800cae7560e102924ebb817ad23-merged.mount: Deactivated successfully.
Jan 31 08:19:22 compute-0 podman[242681]: 2026-01-31 08:19:22.671916728 +0000 UTC m=+0.557720110 container remove b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:19:22 compute-0 systemd[1]: libpod-conmon-b41d714d1ef4523a7f4689fbfb24981045de27f8e39e6ff844709ded0e7d74bc.scope: Deactivated successfully.
Jan 31 08:19:22 compute-0 podman[242695]: 2026-01-31 08:19:22.762019043 +0000 UTC m=+0.507281687 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:19:22 compute-0 podman[242749]: 2026-01-31 08:19:22.840847218 +0000 UTC m=+0.088741369 container create 6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:19:22 compute-0 podman[242749]: 2026-01-31 08:19:22.772170263 +0000 UTC m=+0.020064434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:22 compute-0 ceph-mon[75294]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:23 compute-0 systemd[1]: Started libpod-conmon-6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391.scope.
Jan 31 08:19:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b7b4489b748e2f87f7e975878d61f38ca8ed26fbe35f681d546ab42d8ba74e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b7b4489b748e2f87f7e975878d61f38ca8ed26fbe35f681d546ab42d8ba74e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b7b4489b748e2f87f7e975878d61f38ca8ed26fbe35f681d546ab42d8ba74e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b7b4489b748e2f87f7e975878d61f38ca8ed26fbe35f681d546ab42d8ba74e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:23 compute-0 podman[242749]: 2026-01-31 08:19:23.106154398 +0000 UTC m=+0.354048559 container init 6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:23 compute-0 podman[242749]: 2026-01-31 08:19:23.112047801 +0000 UTC m=+0.359941962 container start 6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:19:23 compute-0 podman[242749]: 2026-01-31 08:19:23.197534409 +0000 UTC m=+0.445428590 container attach 6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:19:23 compute-0 hopeful_ride[242766]: {
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:     "0": [
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:         {
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "devices": [
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "/dev/loop3"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             ],
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_name": "ceph_lv0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_size": "21470642176",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "name": "ceph_lv0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "tags": {
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cluster_name": "ceph",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.crush_device_class": "",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.encrypted": "0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.objectstore": "bluestore",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osd_id": "0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.type": "block",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.vdo": "0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.with_tpm": "0"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             },
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "type": "block",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "vg_name": "ceph_vg0"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:         }
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:     ],
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:     "1": [
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:         {
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "devices": [
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "/dev/loop4"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             ],
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_name": "ceph_lv1",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_size": "21470642176",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "name": "ceph_lv1",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "tags": {
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cluster_name": "ceph",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.crush_device_class": "",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.encrypted": "0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.objectstore": "bluestore",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osd_id": "1",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.type": "block",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.vdo": "0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.with_tpm": "0"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             },
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "type": "block",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "vg_name": "ceph_vg1"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:         }
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:     ],
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:     "2": [
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:         {
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "devices": [
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "/dev/loop5"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             ],
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_name": "ceph_lv2",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_size": "21470642176",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "name": "ceph_lv2",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "tags": {
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.cluster_name": "ceph",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.crush_device_class": "",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.encrypted": "0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.objectstore": "bluestore",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osd_id": "2",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.type": "block",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.vdo": "0",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:                 "ceph.with_tpm": "0"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             },
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "type": "block",
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:             "vg_name": "ceph_vg2"
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:         }
Jan 31 08:19:23 compute-0 hopeful_ride[242766]:     ]
Jan 31 08:19:23 compute-0 hopeful_ride[242766]: }
Jan 31 08:19:23 compute-0 systemd[1]: libpod-6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391.scope: Deactivated successfully.
Jan 31 08:19:23 compute-0 podman[242749]: 2026-01-31 08:19:23.415773061 +0000 UTC m=+0.663667212 container died 6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:19:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-33b7b4489b748e2f87f7e975878d61f38ca8ed26fbe35f681d546ab42d8ba74e-merged.mount: Deactivated successfully.
Jan 31 08:19:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:23 compute-0 podman[242749]: 2026-01-31 08:19:23.784289509 +0000 UTC m=+1.032183660 container remove 6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:19:23 compute-0 systemd[1]: libpod-conmon-6c2564e8ab1a8072def189034c713d7d547561521f4691ce5781fb7ea14b7391.scope: Deactivated successfully.
Jan 31 08:19:23 compute-0 sudo[242645]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:23 compute-0 sudo[242789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:23 compute-0 sudo[242789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:23 compute-0 sudo[242789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:23 compute-0 sudo[242814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:19:23 compute-0 sudo[242814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:24 compute-0 podman[242851]: 2026-01-31 08:19:24.22504941 +0000 UTC m=+0.060761018 container create 96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:19:24 compute-0 podman[242851]: 2026-01-31 08:19:24.183954166 +0000 UTC m=+0.019665804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:24 compute-0 systemd[1]: Started libpod-conmon-96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db.scope.
Jan 31 08:19:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:24 compute-0 podman[242851]: 2026-01-31 08:19:24.3754811 +0000 UTC m=+0.211192738 container init 96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:19:24 compute-0 podman[242851]: 2026-01-31 08:19:24.382030431 +0000 UTC m=+0.217742039 container start 96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:24 compute-0 youthful_mclean[242868]: 167 167
Jan 31 08:19:24 compute-0 systemd[1]: libpod-96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db.scope: Deactivated successfully.
Jan 31 08:19:24 compute-0 podman[242851]: 2026-01-31 08:19:24.415018422 +0000 UTC m=+0.250730050 container attach 96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclean, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:19:24 compute-0 podman[242851]: 2026-01-31 08:19:24.415440113 +0000 UTC m=+0.251151731 container died 96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-450b96bc1bfb9d05631da6f56c1bf2c3886b96857c024fa8d5ce5f5980f97514-merged.mount: Deactivated successfully.
Jan 31 08:19:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:24 compute-0 podman[242851]: 2026-01-31 08:19:24.832742287 +0000 UTC m=+0.668453895 container remove 96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:24 compute-0 systemd[1]: libpod-conmon-96af59c64436ed82fa927c224c6b8b06195e2f374573f62a1ad59828d392e4db.scope: Deactivated successfully.
Jan 31 08:19:25 compute-0 podman[242892]: 2026-01-31 08:19:25.013483504 +0000 UTC m=+0.099443655 container create 22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_benz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:19:25 compute-0 podman[242892]: 2026-01-31 08:19:24.933884568 +0000 UTC m=+0.019844749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:25 compute-0 systemd[1]: Started libpod-conmon-22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457.scope.
Jan 31 08:19:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1203092f55167589c73b77fb5894f3caa86c480cee0a99ffd7215c5b50a79de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1203092f55167589c73b77fb5894f3caa86c480cee0a99ffd7215c5b50a79de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1203092f55167589c73b77fb5894f3caa86c480cee0a99ffd7215c5b50a79de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1203092f55167589c73b77fb5894f3caa86c480cee0a99ffd7215c5b50a79de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:25 compute-0 ceph-mon[75294]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:25 compute-0 podman[242892]: 2026-01-31 08:19:25.318441418 +0000 UTC m=+0.404401589 container init 22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_benz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:19:25 compute-0 podman[242892]: 2026-01-31 08:19:25.325926865 +0000 UTC m=+0.411887016 container start 22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:19:25 compute-0 podman[242892]: 2026-01-31 08:19:25.353580698 +0000 UTC m=+0.439540879 container attach 22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_benz, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:19:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:25 compute-0 lvm[242986]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:19:25 compute-0 lvm[242987]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:19:25 compute-0 lvm[242986]: VG ceph_vg0 finished
Jan 31 08:19:25 compute-0 lvm[242987]: VG ceph_vg1 finished
Jan 31 08:19:26 compute-0 lvm[242989]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:19:26 compute-0 lvm[242989]: VG ceph_vg2 finished
Jan 31 08:19:26 compute-0 intelligent_benz[242908]: {}
Jan 31 08:19:26 compute-0 systemd[1]: libpod-22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457.scope: Deactivated successfully.
Jan 31 08:19:26 compute-0 systemd[1]: libpod-22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457.scope: Consumed 1.183s CPU time.
Jan 31 08:19:26 compute-0 podman[242892]: 2026-01-31 08:19:26.138276038 +0000 UTC m=+1.224236209 container died 22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 08:19:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1203092f55167589c73b77fb5894f3caa86c480cee0a99ffd7215c5b50a79de-merged.mount: Deactivated successfully.
Jan 31 08:19:26 compute-0 podman[242892]: 2026-01-31 08:19:26.373266752 +0000 UTC m=+1.459226903 container remove 22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:19:26 compute-0 systemd[1]: libpod-conmon-22ecaf650e2e9ccb13d569e727506c7492c286c59c2e5a082194076b04bde457.scope: Deactivated successfully.
Jan 31 08:19:26 compute-0 sudo[242814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:19:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:19:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:19:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:19:26 compute-0 sudo[243005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:19:26 compute-0 sudo[243005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:26 compute-0 sudo[243005]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:27 compute-0 ceph-mon[75294]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:19:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:19:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:29 compute-0 ceph-mon[75294]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:30 compute-0 ceph-mon[75294]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:32 compute-0 ceph-mon[75294]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:34 compute-0 ceph-mon[75294]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:37 compute-0 ceph-mon[75294]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:19:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/561275338' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:19:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:19:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/561275338' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:19:39 compute-0 ceph-mon[75294]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/561275338' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:19:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/561275338' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:19:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:40 compute-0 ceph-mon[75294]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:43 compute-0 ceph-mon[75294]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:45 compute-0 ceph-mon[75294]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:19:46.961 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:19:46.962 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:19:46.962 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:47 compute-0 ceph-mon[75294]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:48 compute-0 ceph-mon[75294]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:49 compute-0 podman[243030]: 2026-01-31 08:19:49.187340029 +0000 UTC m=+0.049426974 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:19:50
Jan 31 08:19:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:19:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:19:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data']
Jan 31 08:19:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:19:51 compute-0 ceph-mon[75294]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:52 compute-0 ceph-mon[75294]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:53 compute-0 podman[243051]: 2026-01-31 08:19:53.216415826 +0000 UTC m=+0.091531177 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:19:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:54 compute-0 nova_compute[240062]: 2026-01-31 08:19:54.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:54 compute-0 nova_compute[240062]: 2026-01-31 08:19:54.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:19:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:55 compute-0 nova_compute[240062]: 2026-01-31 08:19:55.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:55 compute-0 nova_compute[240062]: 2026-01-31 08:19:55.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:19:55 compute-0 nova_compute[240062]: 2026-01-31 08:19:55.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:19:55 compute-0 nova_compute[240062]: 2026-01-31 08:19:55.170 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:19:55 compute-0 nova_compute[240062]: 2026-01-31 08:19:55.171 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:55 compute-0 ceph-mon[75294]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:19:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:56 compute-0 nova_compute[240062]: 2026-01-31 08:19:56.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:56 compute-0 nova_compute[240062]: 2026-01-31 08:19:56.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:56 compute-0 nova_compute[240062]: 2026-01-31 08:19:56.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:56 compute-0 nova_compute[240062]: 2026-01-31 08:19:56.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.149 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.166 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.194 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.194 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.195 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.195 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.195 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:57 compute-0 ceph-mon[75294]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:19:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089759221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.705 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.830 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.831 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.831 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.832 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.887 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.888 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:19:57 compute-0 nova_compute[240062]: 2026-01-31 08:19:57.909 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:19:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579920297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:19:58 compute-0 nova_compute[240062]: 2026-01-31 08:19:58.409 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:58 compute-0 nova_compute[240062]: 2026-01-31 08:19:58.413 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:58 compute-0 nova_compute[240062]: 2026-01-31 08:19:58.427 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:58 compute-0 nova_compute[240062]: 2026-01-31 08:19:58.429 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:19:58 compute-0 nova_compute[240062]: 2026-01-31 08:19:58.429 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2089759221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:19:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1579920297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:19:59 compute-0 nova_compute[240062]: 2026-01-31 08:19:59.418 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:00 compute-0 ceph-mon[75294]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:01 compute-0 ceph-mon[75294]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:03 compute-0 ceph-mon[75294]: pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:05 compute-0 ceph-mon[75294]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:20:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:07 compute-0 ceph-mon[75294]: pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:08 compute-0 ceph-mon[75294]: pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:11 compute-0 ceph-mon[75294]: pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:12 compute-0 ceph-mon[75294]: pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:20:15 compute-0 ceph-mon[75294]: pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:20:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:20:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:16 compute-0 ceph-mon[75294]: pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 08:20:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:20:19 compute-0 ceph-mon[75294]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:20:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 08:20:20 compute-0 podman[243121]: 2026-01-31 08:20:20.164856413 +0000 UTC m=+0.036438486 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 08:20:20 compute-0 ceph-mon[75294]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 08:20:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 08:20:23 compute-0 ceph-mon[75294]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 08:20:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Jan 31 08:20:24 compute-0 podman[243141]: 2026-01-31 08:20:24.204144651 +0000 UTC m=+0.073268513 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:20:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:25 compute-0 ceph-mon[75294]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Jan 31 08:20:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 08:20:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:26 compute-0 sudo[243168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:26 compute-0 sudo[243168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:26 compute-0 sudo[243168]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:26 compute-0 sudo[243193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:20:26 compute-0 sudo[243193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:26 compute-0 ceph-mon[75294]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 08:20:27 compute-0 sudo[243193]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:20:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:20:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:20:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:20:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:20:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:20:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:20:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:20:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:20:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:20:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:20:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:20:27 compute-0 sudo[243249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:27 compute-0 sudo[243249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:27 compute-0 sudo[243249]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:27 compute-0 sudo[243274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:20:27 compute-0 sudo[243274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 31 08:20:27 compute-0 podman[243311]: 2026-01-31 08:20:27.726540637 +0000 UTC m=+0.023538100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:20:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:20:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:20:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:20:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:20:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:20:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:20:28 compute-0 podman[243311]: 2026-01-31 08:20:28.18618808 +0000 UTC m=+0.483185513 container create c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:20:28 compute-0 systemd[1]: Started libpod-conmon-c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee.scope.
Jan 31 08:20:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:28 compute-0 podman[243311]: 2026-01-31 08:20:28.358313989 +0000 UTC m=+0.655311442 container init c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:20:28 compute-0 podman[243311]: 2026-01-31 08:20:28.363096511 +0000 UTC m=+0.660093944 container start c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:20:28 compute-0 nervous_hertz[243327]: 167 167
Jan 31 08:20:28 compute-0 systemd[1]: libpod-c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee.scope: Deactivated successfully.
Jan 31 08:20:28 compute-0 podman[243311]: 2026-01-31 08:20:28.411154697 +0000 UTC m=+0.708152130 container attach c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hertz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:20:28 compute-0 podman[243311]: 2026-01-31 08:20:28.41161107 +0000 UTC m=+0.708608503 container died c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hertz, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:20:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-07ef76f9635d9796f5c4cc089480606023afba88ca62087ebb77991af6614648-merged.mount: Deactivated successfully.
Jan 31 08:20:29 compute-0 podman[243311]: 2026-01-31 08:20:29.423815707 +0000 UTC m=+1.720813140 container remove c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:20:29 compute-0 systemd[1]: libpod-conmon-c8b7abb56c9fc0b0526ebd495a77fb9b34070090ea3800069e60a84684feccee.scope: Deactivated successfully.
Jan 31 08:20:29 compute-0 podman[243352]: 2026-01-31 08:20:29.521818091 +0000 UTC m=+0.017892885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:20:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Jan 31 08:20:29 compute-0 ceph-mon[75294]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 31 08:20:29 compute-0 podman[243352]: 2026-01-31 08:20:29.94652622 +0000 UTC m=+0.442600994 container create 505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:20:30 compute-0 systemd[1]: Started libpod-conmon-505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f.scope.
Jan 31 08:20:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1910b6fd8420c0591d69e356ca11c6975afbe0e6d21105ddf55244af16e521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1910b6fd8420c0591d69e356ca11c6975afbe0e6d21105ddf55244af16e521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1910b6fd8420c0591d69e356ca11c6975afbe0e6d21105ddf55244af16e521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1910b6fd8420c0591d69e356ca11c6975afbe0e6d21105ddf55244af16e521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1910b6fd8420c0591d69e356ca11c6975afbe0e6d21105ddf55244af16e521/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:31 compute-0 podman[243352]: 2026-01-31 08:20:31.065436632 +0000 UTC m=+1.561511486 container init 505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:31 compute-0 podman[243352]: 2026-01-31 08:20:31.071263942 +0000 UTC m=+1.567338716 container start 505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:31 compute-0 ceph-mon[75294]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Jan 31 08:20:31 compute-0 podman[243352]: 2026-01-31 08:20:31.276092483 +0000 UTC m=+1.772167257 container attach 505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:20:31 compute-0 lucid_lewin[243369]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:20:31 compute-0 lucid_lewin[243369]: --> All data devices are unavailable
Jan 31 08:20:31 compute-0 systemd[1]: libpod-505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f.scope: Deactivated successfully.
Jan 31 08:20:31 compute-0 podman[243352]: 2026-01-31 08:20:31.447457772 +0000 UTC m=+1.943532536 container died 505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 08:20:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f1910b6fd8420c0591d69e356ca11c6975afbe0e6d21105ddf55244af16e521-merged.mount: Deactivated successfully.
Jan 31 08:20:32 compute-0 ceph-mon[75294]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 08:20:33 compute-0 podman[243352]: 2026-01-31 08:20:33.248105003 +0000 UTC m=+3.744179777 container remove 505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:20:33 compute-0 systemd[1]: libpod-conmon-505e141b3c96f5b1b6b06a1cb8d8f33b575c747ee5ac21a0b014b73ef0a28b6f.scope: Deactivated successfully.
Jan 31 08:20:33 compute-0 sudo[243274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:33 compute-0 sudo[243403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:33 compute-0 sudo[243403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:33 compute-0 sudo[243403]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:33 compute-0 sudo[243428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:20:33 compute-0 sudo[243428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:33 compute-0 podman[243464]: 2026-01-31 08:20:33.648033308 +0000 UTC m=+0.026474631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:20:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Jan 31 08:20:33 compute-0 podman[243464]: 2026-01-31 08:20:33.780912024 +0000 UTC m=+0.159353307 container create 02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:20:34 compute-0 systemd[1]: Started libpod-conmon-02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203.scope.
Jan 31 08:20:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:34 compute-0 podman[243464]: 2026-01-31 08:20:34.23742754 +0000 UTC m=+0.615868833 container init 02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:20:34 compute-0 podman[243464]: 2026-01-31 08:20:34.241811452 +0000 UTC m=+0.620252725 container start 02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lederberg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:20:34 compute-0 crazy_lederberg[243480]: 167 167
Jan 31 08:20:34 compute-0 systemd[1]: libpod-02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203.scope: Deactivated successfully.
Jan 31 08:20:34 compute-0 podman[243464]: 2026-01-31 08:20:34.282599107 +0000 UTC m=+0.661040420 container attach 02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:20:34 compute-0 podman[243464]: 2026-01-31 08:20:34.284302924 +0000 UTC m=+0.662744227 container died 02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lederberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4be13bafd3a8b00bfcd4a56d30f61eb03342f2e95edf4d5395a7a93955a069b8-merged.mount: Deactivated successfully.
Jan 31 08:20:35 compute-0 ceph-mon[75294]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Jan 31 08:20:35 compute-0 podman[243464]: 2026-01-31 08:20:35.70490457 +0000 UTC m=+2.083345843 container remove 02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:20:35 compute-0 systemd[1]: libpod-conmon-02e91d058f8ad8cb378548ccfd38201d930eac61b787e25e9b1c4e52c34cb203.scope: Deactivated successfully.
Jan 31 08:20:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Jan 31 08:20:35 compute-0 podman[243504]: 2026-01-31 08:20:35.803830339 +0000 UTC m=+0.016633631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:20:35 compute-0 podman[243504]: 2026-01-31 08:20:35.926043141 +0000 UTC m=+0.138846433 container create 1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:20:36 compute-0 systemd[1]: Started libpod-conmon-1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234.scope.
Jan 31 08:20:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3271a133efddf25b06d1e1b91b6fa5f9349858cb059d33f6a6d70fceda4b4149/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3271a133efddf25b06d1e1b91b6fa5f9349858cb059d33f6a6d70fceda4b4149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3271a133efddf25b06d1e1b91b6fa5f9349858cb059d33f6a6d70fceda4b4149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3271a133efddf25b06d1e1b91b6fa5f9349858cb059d33f6a6d70fceda4b4149/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:36 compute-0 podman[243504]: 2026-01-31 08:20:36.304006759 +0000 UTC m=+0.516810071 container init 1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:20:36 compute-0 podman[243504]: 2026-01-31 08:20:36.310479558 +0000 UTC m=+0.523282840 container start 1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:20:36 compute-0 busy_lichterman[243520]: {
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:     "0": [
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:         {
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "devices": [
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "/dev/loop3"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             ],
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_name": "ceph_lv0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_size": "21470642176",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "name": "ceph_lv0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "tags": {
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cluster_name": "ceph",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.crush_device_class": "",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.encrypted": "0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.objectstore": "bluestore",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osd_id": "0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.type": "block",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.vdo": "0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.with_tpm": "0"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             },
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "type": "block",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "vg_name": "ceph_vg0"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:         }
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:     ],
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:     "1": [
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:         {
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "devices": [
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "/dev/loop4"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             ],
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_name": "ceph_lv1",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_size": "21470642176",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "name": "ceph_lv1",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "tags": {
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cluster_name": "ceph",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.crush_device_class": "",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.encrypted": "0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.objectstore": "bluestore",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osd_id": "1",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.type": "block",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.vdo": "0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.with_tpm": "0"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             },
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "type": "block",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "vg_name": "ceph_vg1"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:         }
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:     ],
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:     "2": [
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:         {
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "devices": [
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "/dev/loop5"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             ],
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_name": "ceph_lv2",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_size": "21470642176",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "name": "ceph_lv2",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "tags": {
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.cluster_name": "ceph",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.crush_device_class": "",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.encrypted": "0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.objectstore": "bluestore",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osd_id": "2",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.type": "block",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.vdo": "0",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:                 "ceph.with_tpm": "0"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             },
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "type": "block",
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:             "vg_name": "ceph_vg2"
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:         }
Jan 31 08:20:36 compute-0 busy_lichterman[243520]:     ]
Jan 31 08:20:36 compute-0 busy_lichterman[243520]: }
Jan 31 08:20:36 compute-0 systemd[1]: libpod-1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234.scope: Deactivated successfully.
Jan 31 08:20:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:36 compute-0 podman[243504]: 2026-01-31 08:20:36.661395611 +0000 UTC m=+0.874198913 container attach 1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lichterman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:20:36 compute-0 podman[243504]: 2026-01-31 08:20:36.662225703 +0000 UTC m=+0.875028985 container died 1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lichterman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:20:37 compute-0 ceph-mon[75294]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Jan 31 08:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3271a133efddf25b06d1e1b91b6fa5f9349858cb059d33f6a6d70fceda4b4149-merged.mount: Deactivated successfully.
Jan 31 08:20:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Jan 31 08:20:38 compute-0 podman[243529]: 2026-01-31 08:20:38.470974149 +0000 UTC m=+1.874200472 container remove 1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:20:38 compute-0 systemd[1]: libpod-conmon-1a03b2ccbbf2c82adfae2e8899fe2241d692b8401d437db2497a5dac87d3e234.scope: Deactivated successfully.
Jan 31 08:20:38 compute-0 sudo[243428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:38 compute-0 sudo[243542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:38 compute-0 sudo[243542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:38 compute-0 sudo[243542]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:38 compute-0 sudo[243567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:20:38 compute-0 sudo[243567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:38 compute-0 podman[243605]: 2026-01-31 08:20:38.881063454 +0000 UTC m=+0.028116317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:20:39 compute-0 podman[243605]: 2026-01-31 08:20:39.168448153 +0000 UTC m=+0.315500996 container create 9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:20:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920125381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:20:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:20:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920125381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:20:39 compute-0 systemd[1]: Started libpod-conmon-9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987.scope.
Jan 31 08:20:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:39 compute-0 podman[243605]: 2026-01-31 08:20:39.738382978 +0000 UTC m=+0.885435841 container init 9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:20:39 compute-0 podman[243605]: 2026-01-31 08:20:39.74313763 +0000 UTC m=+0.890190513 container start 9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:20:39 compute-0 ceph-mon[75294]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Jan 31 08:20:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3920125381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:20:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3920125381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:20:39 compute-0 romantic_perlman[243622]: 167 167
Jan 31 08:20:39 compute-0 systemd[1]: libpod-9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987.scope: Deactivated successfully.
Jan 31 08:20:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 31 08:20:39 compute-0 podman[243605]: 2026-01-31 08:20:39.9439651 +0000 UTC m=+1.091017963 container attach 9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:20:39 compute-0 podman[243605]: 2026-01-31 08:20:39.944413792 +0000 UTC m=+1.091466655 container died 9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9d92457a7fb3e3a1bfdb4d96fbb8055c1e9d4cf510b19a1c3b86c3f2c54fdb-merged.mount: Deactivated successfully.
Jan 31 08:20:40 compute-0 podman[243605]: 2026-01-31 08:20:40.867645217 +0000 UTC m=+2.014698130 container remove 9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:20:40 compute-0 systemd[1]: libpod-conmon-9099010617eadafe9eb8ecf345e4735b3a88f30462139c0c6bc95f07f8658987.scope: Deactivated successfully.
Jan 31 08:20:40 compute-0 ceph-mon[75294]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 31 08:20:41 compute-0 podman[243646]: 2026-01-31 08:20:40.99645076 +0000 UTC m=+0.026938344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:20:41 compute-0 podman[243646]: 2026-01-31 08:20:41.411644705 +0000 UTC m=+0.442132239 container create ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noyce, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:20:41 compute-0 systemd[1]: Started libpod-conmon-ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647.scope.
Jan 31 08:20:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bbdf9536612f80e19b3353518f041e3e36855a6a3dc464291c9265128e13db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bbdf9536612f80e19b3353518f041e3e36855a6a3dc464291c9265128e13db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bbdf9536612f80e19b3353518f041e3e36855a6a3dc464291c9265128e13db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bbdf9536612f80e19b3353518f041e3e36855a6a3dc464291c9265128e13db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:41 compute-0 podman[243646]: 2026-01-31 08:20:41.686464118 +0000 UTC m=+0.716951652 container init ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noyce, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:20:41 compute-0 podman[243646]: 2026-01-31 08:20:41.695358183 +0000 UTC m=+0.725845677 container start ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noyce, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:20:41 compute-0 podman[243646]: 2026-01-31 08:20:41.761860649 +0000 UTC m=+0.792348183 container attach ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:20:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:20:42 compute-0 lvm[243742]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:20:42 compute-0 lvm[243742]: VG ceph_vg0 finished
Jan 31 08:20:42 compute-0 lvm[243743]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:20:42 compute-0 lvm[243743]: VG ceph_vg1 finished
Jan 31 08:20:42 compute-0 lvm[243745]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:20:42 compute-0 lvm[243745]: VG ceph_vg2 finished
Jan 31 08:20:42 compute-0 silly_noyce[243662]: {}
Jan 31 08:20:42 compute-0 podman[243646]: 2026-01-31 08:20:42.441472949 +0000 UTC m=+1.471960443 container died ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noyce, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:20:42 compute-0 systemd[1]: libpod-ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647.scope: Deactivated successfully.
Jan 31 08:20:42 compute-0 systemd[1]: libpod-ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647.scope: Consumed 1.012s CPU time.
Jan 31 08:20:42 compute-0 ceph-mon[75294]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-60bbdf9536612f80e19b3353518f041e3e36855a6a3dc464291c9265128e13db-merged.mount: Deactivated successfully.
Jan 31 08:20:43 compute-0 podman[243646]: 2026-01-31 08:20:43.740182563 +0000 UTC m=+2.770670057 container remove ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noyce, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:20:43 compute-0 sudo[243567]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:20:43 compute-0 systemd[1]: libpod-conmon-ac49ee279a28cc2645b96e5fb6557519e8fe0ecb4a1d260e3509e782757a1647.scope: Deactivated successfully.
Jan 31 08:20:43 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:20:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:20:44 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:20:44 compute-0 sudo[243760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:20:44 compute-0 sudo[243760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:44 compute-0 sudo[243760]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:45 compute-0 ceph-mon[75294]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:20:45 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:20:45 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:20:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:20:46.962 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:20:46.963 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:20:46.963 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:47 compute-0 ceph-mon[75294]: pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:49 compute-0 ceph-mon[75294]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:20:50
Jan 31 08:20:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:20:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:20:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', 'volumes']
Jan 31 08:20:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:20:50 compute-0 ceph-mon[75294]: pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:51 compute-0 podman[243785]: 2026-01-31 08:20:51.178384121 +0000 UTC m=+0.044804117 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 08:20:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:52 compute-0 nova_compute[240062]: 2026-01-31 08:20:52.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:52 compute-0 nova_compute[240062]: 2026-01-31 08:20:52.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:20:52 compute-0 nova_compute[240062]: 2026-01-31 08:20:52.174 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:20:52 compute-0 nova_compute[240062]: 2026-01-31 08:20:52.176 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:52 compute-0 nova_compute[240062]: 2026-01-31 08:20:52.176 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:20:52 compute-0 nova_compute[240062]: 2026-01-31 08:20:52.190 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:53 compute-0 ceph-mon[75294]: pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:54 compute-0 nova_compute[240062]: 2026-01-31 08:20:54.213 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:54 compute-0 nova_compute[240062]: 2026-01-31 08:20:54.213 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:20:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:55 compute-0 nova_compute[240062]: 2026-01-31 08:20:55.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:55 compute-0 nova_compute[240062]: 2026-01-31 08:20:55.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:20:55 compute-0 nova_compute[240062]: 2026-01-31 08:20:55.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:20:55 compute-0 podman[243805]: 2026-01-31 08:20:55.196697782 +0000 UTC m=+0.068048909 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:20:55 compute-0 nova_compute[240062]: 2026-01-31 08:20:55.277 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:20:55 compute-0 nova_compute[240062]: 2026-01-31 08:20:55.278 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:55 compute-0 ceph-mon[75294]: pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:20:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:56 compute-0 nova_compute[240062]: 2026-01-31 08:20:56.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:56 compute-0 nova_compute[240062]: 2026-01-31 08:20:56.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.207 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.207 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.207 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.207 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.208 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:57 compute-0 ceph-mon[75294]: pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:20:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939990524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.712 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.861 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.863 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.863 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:57 compute-0 nova_compute[240062]: 2026-01-31 08:20:57.864 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.234 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.235 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.337 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing inventories for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.421 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating ProviderTree inventory for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.422 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.443 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing aggregate associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.461 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing trait associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.478 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:58 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1939990524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:20:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:20:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133968200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.986 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:58 compute-0 nova_compute[240062]: 2026-01-31 08:20:58.991 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:20:59 compute-0 nova_compute[240062]: 2026-01-31 08:20:59.061 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:20:59 compute-0 nova_compute[240062]: 2026-01-31 08:20:59.062 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:20:59 compute-0 nova_compute[240062]: 2026-01-31 08:20:59.062 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:59 compute-0 ceph-mon[75294]: pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3133968200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:21:00 compute-0 nova_compute[240062]: 2026-01-31 08:21:00.063 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:00 compute-0 nova_compute[240062]: 2026-01-31 08:21:00.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:01 compute-0 ceph-mon[75294]: pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:03 compute-0 ceph-mon[75294]: pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:05 compute-0 ceph-mon[75294]: pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.254442737974552e-06 of space, bias 4.0, pg target 0.0027053312855694622 quantized to 16 (current 16)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:21:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:06 compute-0 ceph-mon[75294]: pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:09 compute-0 ceph-mon[75294]: pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:10 compute-0 ceph-mon[75294]: pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:13 compute-0 ceph-mon[75294]: pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:13 compute-0 sshd-session[243875]: Invalid user admin from 45.148.10.121 port 42280
Jan 31 08:21:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:13 compute-0 sshd-session[243875]: Connection closed by invalid user admin 45.148.10.121 port 42280 [preauth]
Jan 31 08:21:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 31 08:21:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 08:21:14 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-0 sshd-session[243877]: Invalid user ubuntu from 80.94.92.182 port 51248
Jan 31 08:21:15 compute-0 ceph-mon[75294]: pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:15 compute-0 ceph-mon[75294]: osdmap e138: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 31 08:21:15 compute-0 sshd-session[243877]: Connection closed by invalid user ubuntu 80.94.92.182 port 51248 [preauth]
Jan 31 08:21:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:16 compute-0 ceph-mon[75294]: osdmap e139: 3 total, 3 up, 3 in
Jan 31 08:21:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 2.0 MiB/s wr, 1 op/s
Jan 31 08:21:17 compute-0 ceph-mon[75294]: pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 31 08:21:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 08:21:18 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 31 08:21:19 compute-0 ceph-mon[75294]: pgmap v863: 305 pgs: 305 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 2.0 MiB/s wr, 1 op/s
Jan 31 08:21:19 compute-0 ceph-mon[75294]: osdmap e140: 3 total, 3 up, 3 in
Jan 31 08:21:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.4 MiB/s wr, 24 op/s
Jan 31 08:21:21 compute-0 ceph-mon[75294]: pgmap v865: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.4 MiB/s wr, 24 op/s
Jan 31 08:21:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 31 08:21:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.7 MiB/s wr, 19 op/s
Jan 31 08:21:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 08:21:22 compute-0 podman[243879]: 2026-01-31 08:21:22.186206452 +0000 UTC m=+0.048076537 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:21:22 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 31 08:21:23 compute-0 ceph-mon[75294]: pgmap v866: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.7 MiB/s wr, 19 op/s
Jan 31 08:21:23 compute-0 ceph-mon[75294]: osdmap e141: 3 total, 3 up, 3 in
Jan 31 08:21:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 31 08:21:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Jan 31 08:21:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 31 08:21:24 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 31 08:21:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:25 compute-0 ceph-mon[75294]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Jan 31 08:21:25 compute-0 ceph-mon[75294]: osdmap e142: 3 total, 3 up, 3 in
Jan 31 08:21:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Jan 31 08:21:26 compute-0 podman[243898]: 2026-01-31 08:21:26.195315498 +0000 UTC m=+0.066910116 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:21:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:27 compute-0 ceph-mon[75294]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Jan 31 08:21:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Jan 31 08:21:28 compute-0 ceph-mon[75294]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Jan 31 08:21:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.6 MiB/s wr, 29 op/s
Jan 31 08:21:31 compute-0 ceph-mon[75294]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.6 MiB/s wr, 29 op/s
Jan 31 08:21:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.2 MiB/s wr, 25 op/s
Jan 31 08:21:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:33 compute-0 ceph-mon[75294]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.2 MiB/s wr, 25 op/s
Jan 31 08:21:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 7 op/s
Jan 31 08:21:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 31 08:21:34 compute-0 ceph-mon[75294]: pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 7 op/s
Jan 31 08:21:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 31 08:21:35 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 31 08:21:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 7 op/s
Jan 31 08:21:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:36 compute-0 ceph-mon[75294]: osdmap e143: 3 total, 3 up, 3 in
Jan 31 08:21:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 409 B/s wr, 8 op/s
Jan 31 08:21:38 compute-0 ceph-mon[75294]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 7 op/s
Jan 31 08:21:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:21:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1582583898' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:21:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:21:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1582583898' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:21:39 compute-0 ceph-mon[75294]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 409 B/s wr, 8 op/s
Jan 31 08:21:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Jan 31 08:21:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1582583898' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:21:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1582583898' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:21:41 compute-0 ceph-mon[75294]: pgmap v878: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Jan 31 08:21:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Jan 31 08:21:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 31 08:21:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 31 08:21:42 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 31 08:21:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 31 08:21:43 compute-0 ceph-mon[75294]: pgmap v879: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Jan 31 08:21:43 compute-0 ceph-mon[75294]: osdmap e144: 3 total, 3 up, 3 in
Jan 31 08:21:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 31 08:21:43 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 31 08:21:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.5 KiB/s wr, 38 op/s
Jan 31 08:21:44 compute-0 sudo[243924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:44 compute-0 sudo[243924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:44 compute-0 sudo[243924]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:44 compute-0 sudo[243949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:21:44 compute-0 sudo[243949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:44 compute-0 ceph-mon[75294]: osdmap e145: 3 total, 3 up, 3 in
Jan 31 08:21:44 compute-0 ceph-mon[75294]: pgmap v882: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.5 KiB/s wr, 38 op/s
Jan 31 08:21:44 compute-0 podman[244017]: 2026-01-31 08:21:44.947957055 +0000 UTC m=+0.246720978 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:21:45 compute-0 podman[244017]: 2026-01-31 08:21:45.034514033 +0000 UTC m=+0.333277936 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:21:45 compute-0 sudo[243949]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:21:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.5 KiB/s wr, 37 op/s
Jan 31 08:21:45 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:21:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:46 compute-0 sudo[244204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:46 compute-0 sudo[244204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:46 compute-0 sudo[244204]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:46 compute-0 sudo[244229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:21:46 compute-0 sudo[244229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:46 compute-0 sudo[244229]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:21:46 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:21:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:21:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:21:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:21:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:21:46 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:21:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:21:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:21:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:21:46 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:21:46 compute-0 sudo[244286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:46 compute-0 sudo[244286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:46 compute-0 sudo[244286]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:46 compute-0 sudo[244311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:21:46 compute-0 sudo[244311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:21:46.963 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:21:46.964 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:21:46.964 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:47 compute-0 ceph-mon[75294]: pgmap v883: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.5 KiB/s wr, 37 op/s
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:21:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:21:47 compute-0 podman[244348]: 2026-01-31 08:21:47.061395456 +0000 UTC m=+0.017144813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:47 compute-0 podman[244348]: 2026-01-31 08:21:47.212877526 +0000 UTC m=+0.168626853 container create 38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_ritchie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:21:47 compute-0 systemd[1]: Started libpod-conmon-38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd.scope.
Jan 31 08:21:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:47 compute-0 podman[244348]: 2026-01-31 08:21:47.500471472 +0000 UTC m=+0.456220819 container init 38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:21:47 compute-0 podman[244348]: 2026-01-31 08:21:47.507082634 +0000 UTC m=+0.462831971 container start 38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_ritchie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:21:47 compute-0 laughing_ritchie[244364]: 167 167
Jan 31 08:21:47 compute-0 systemd[1]: libpod-38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd.scope: Deactivated successfully.
Jan 31 08:21:47 compute-0 podman[244348]: 2026-01-31 08:21:47.707289098 +0000 UTC m=+0.663038455 container attach 38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:21:47 compute-0 podman[244348]: 2026-01-31 08:21:47.707892275 +0000 UTC m=+0.663641612 container died 38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 13 MiB data, 150 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1023 B/s wr, 13 op/s
Jan 31 08:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec701a21a3337f83974b3b9a9cab3716bff11cfb7768e14062af7f047caf3553-merged.mount: Deactivated successfully.
Jan 31 08:21:48 compute-0 podman[244348]: 2026-01-31 08:21:48.575247735 +0000 UTC m=+1.530997082 container remove 38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:21:48 compute-0 systemd[1]: libpod-conmon-38dfc367b25f7ea63b5e04b56ee1b0be7f953a6279297de562dc0e48860900dd.scope: Deactivated successfully.
Jan 31 08:21:48 compute-0 podman[244389]: 2026-01-31 08:21:48.669177807 +0000 UTC m=+0.020576768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:48 compute-0 podman[244389]: 2026-01-31 08:21:48.815903526 +0000 UTC m=+0.167302477 container create 336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:49 compute-0 systemd[1]: Started libpod-conmon-336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578.scope.
Jan 31 08:21:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78735d438bd55fdcee8bff5e90a5002f55c607a03bf8dfb295460800b2ab58aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78735d438bd55fdcee8bff5e90a5002f55c607a03bf8dfb295460800b2ab58aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78735d438bd55fdcee8bff5e90a5002f55c607a03bf8dfb295460800b2ab58aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78735d438bd55fdcee8bff5e90a5002f55c607a03bf8dfb295460800b2ab58aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78735d438bd55fdcee8bff5e90a5002f55c607a03bf8dfb295460800b2ab58aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:49 compute-0 podman[244389]: 2026-01-31 08:21:49.286436418 +0000 UTC m=+0.637835409 container init 336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_margulis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:21:49 compute-0 podman[244389]: 2026-01-31 08:21:49.29264027 +0000 UTC m=+0.644039241 container start 336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:21:49 compute-0 podman[244389]: 2026-01-31 08:21:49.467474024 +0000 UTC m=+0.818873055 container attach 336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:21:49 compute-0 ceph-mon[75294]: pgmap v884: 305 pgs: 305 active+clean; 13 MiB data, 150 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1023 B/s wr, 13 op/s
Jan 31 08:21:49 compute-0 elastic_margulis[244405]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:21:49 compute-0 elastic_margulis[244405]: --> All data devices are unavailable
Jan 31 08:21:49 compute-0 systemd[1]: libpod-336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578.scope: Deactivated successfully.
Jan 31 08:21:49 compute-0 podman[244389]: 2026-01-31 08:21:49.721988616 +0000 UTC m=+1.073387597 container died 336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_margulis, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:21:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Jan 31 08:21:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-78735d438bd55fdcee8bff5e90a5002f55c607a03bf8dfb295460800b2ab58aa-merged.mount: Deactivated successfully.
Jan 31 08:21:50 compute-0 podman[244389]: 2026-01-31 08:21:50.690252381 +0000 UTC m=+2.041651322 container remove 336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:21:50 compute-0 sudo[244311]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:50 compute-0 sudo[244437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:50 compute-0 sudo[244437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:50 compute-0 sudo[244437]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:50 compute-0 systemd[1]: libpod-conmon-336969d287797b8c301f7712db80ffe84fec16913502e6fb629fe8400ddea578.scope: Deactivated successfully.
Jan 31 08:21:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:21:50
Jan 31 08:21:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:21:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:21:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log']
Jan 31 08:21:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:21:50 compute-0 sudo[244462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:21:50 compute-0 sudo[244462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:50 compute-0 ceph-mon[75294]: pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Jan 31 08:21:51 compute-0 podman[244497]: 2026-01-31 08:21:51.132118524 +0000 UTC m=+0.083094135 container create e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:51 compute-0 podman[244497]: 2026-01-31 08:21:51.06783691 +0000 UTC m=+0.018812531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:51 compute-0 systemd[1]: Started libpod-conmon-e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8.scope.
Jan 31 08:21:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:51 compute-0 podman[244497]: 2026-01-31 08:21:51.329419496 +0000 UTC m=+0.280395177 container init e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:21:51 compute-0 podman[244497]: 2026-01-31 08:21:51.334907138 +0000 UTC m=+0.285882759 container start e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:21:51 compute-0 dreamy_mccarthy[244514]: 167 167
Jan 31 08:21:51 compute-0 systemd[1]: libpod-e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8.scope: Deactivated successfully.
Jan 31 08:21:51 compute-0 conmon[244514]: conmon e54b8e4a549cc2630464 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8.scope/container/memory.events
Jan 31 08:21:51 compute-0 podman[244497]: 2026-01-31 08:21:51.483985822 +0000 UTC m=+0.434961453 container attach e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mccarthy, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:21:51 compute-0 podman[244497]: 2026-01-31 08:21:51.484950559 +0000 UTC m=+0.435926200 container died e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e636b46e2974f8e0c02e609d7e794e072434d3fd3561c53cc4f21540cab45654-merged.mount: Deactivated successfully.
Jan 31 08:21:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Jan 31 08:21:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 31 08:21:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 31 08:21:52 compute-0 podman[244497]: 2026-01-31 08:21:52.500731095 +0000 UTC m=+1.451706686 container remove e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:52 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 31 08:21:52 compute-0 systemd[1]: libpod-conmon-e54b8e4a549cc263046464ee9066cd7f8a6db2b6c15df9d08d97438d967b4ec8.scope: Deactivated successfully.
Jan 31 08:21:52 compute-0 podman[244537]: 2026-01-31 08:21:52.678106909 +0000 UTC m=+0.093513862 container create f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:21:52 compute-0 podman[244537]: 2026-01-31 08:21:52.607177712 +0000 UTC m=+0.022584695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:52 compute-0 systemd[1]: Started libpod-conmon-f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3.scope.
Jan 31 08:21:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac02cab4342b920826b45a9400bcc9f73de3671cbda0e1960e44aaef45c3a64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac02cab4342b920826b45a9400bcc9f73de3671cbda0e1960e44aaef45c3a64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac02cab4342b920826b45a9400bcc9f73de3671cbda0e1960e44aaef45c3a64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac02cab4342b920826b45a9400bcc9f73de3671cbda0e1960e44aaef45c3a64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:52 compute-0 podman[244548]: 2026-01-31 08:21:52.787941749 +0000 UTC m=+0.155345337 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:21:52 compute-0 podman[244537]: 2026-01-31 08:21:52.857623862 +0000 UTC m=+0.273030825 container init f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:21:52 compute-0 podman[244537]: 2026-01-31 08:21:52.863118073 +0000 UTC m=+0.278525026 container start f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:52 compute-0 podman[244537]: 2026-01-31 08:21:52.982343483 +0000 UTC m=+0.397750456 container attach f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:21:53 compute-0 condescending_curran[244574]: {
Jan 31 08:21:53 compute-0 condescending_curran[244574]:     "0": [
Jan 31 08:21:53 compute-0 condescending_curran[244574]:         {
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "devices": [
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "/dev/loop3"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             ],
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_name": "ceph_lv0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_size": "21470642176",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "name": "ceph_lv0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "tags": {
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cluster_name": "ceph",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.crush_device_class": "",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.encrypted": "0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.objectstore": "bluestore",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osd_id": "0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.type": "block",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.vdo": "0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.with_tpm": "0"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             },
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "type": "block",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "vg_name": "ceph_vg0"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:         }
Jan 31 08:21:53 compute-0 condescending_curran[244574]:     ],
Jan 31 08:21:53 compute-0 condescending_curran[244574]:     "1": [
Jan 31 08:21:53 compute-0 condescending_curran[244574]:         {
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "devices": [
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "/dev/loop4"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             ],
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_name": "ceph_lv1",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_size": "21470642176",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "name": "ceph_lv1",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "tags": {
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cluster_name": "ceph",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.crush_device_class": "",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.encrypted": "0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.objectstore": "bluestore",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osd_id": "1",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.type": "block",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.vdo": "0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.with_tpm": "0"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             },
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "type": "block",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "vg_name": "ceph_vg1"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:         }
Jan 31 08:21:53 compute-0 condescending_curran[244574]:     ],
Jan 31 08:21:53 compute-0 condescending_curran[244574]:     "2": [
Jan 31 08:21:53 compute-0 condescending_curran[244574]:         {
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "devices": [
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "/dev/loop5"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             ],
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_name": "ceph_lv2",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_size": "21470642176",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "name": "ceph_lv2",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "tags": {
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.cluster_name": "ceph",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.crush_device_class": "",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.encrypted": "0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.objectstore": "bluestore",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osd_id": "2",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.type": "block",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.vdo": "0",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:                 "ceph.with_tpm": "0"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             },
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "type": "block",
Jan 31 08:21:53 compute-0 condescending_curran[244574]:             "vg_name": "ceph_vg2"
Jan 31 08:21:53 compute-0 condescending_curran[244574]:         }
Jan 31 08:21:53 compute-0 condescending_curran[244574]:     ]
Jan 31 08:21:53 compute-0 condescending_curran[244574]: }
Jan 31 08:21:53 compute-0 systemd[1]: libpod-f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3.scope: Deactivated successfully.
Jan 31 08:21:53 compute-0 podman[244583]: 2026-01-31 08:21:53.15401801 +0000 UTC m=+0.024839037 container died f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curran, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ac02cab4342b920826b45a9400bcc9f73de3671cbda0e1960e44aaef45c3a64-merged.mount: Deactivated successfully.
Jan 31 08:21:53 compute-0 ceph-mon[75294]: pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Jan 31 08:21:53 compute-0 ceph-mon[75294]: osdmap e146: 3 total, 3 up, 3 in
Jan 31 08:21:53 compute-0 podman[244583]: 2026-01-31 08:21:53.701727131 +0000 UTC m=+0.572548138 container remove f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:21:53 compute-0 systemd[1]: libpod-conmon-f0959a0e2e4543b39f42b800f89005262c59bae2a360c18c2351ace1ad54e2e3.scope: Deactivated successfully.
Jan 31 08:21:53 compute-0 sudo[244462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:53 compute-0 sudo[244598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:53 compute-0 sudo[244598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:53 compute-0 sudo[244598]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 31 08:21:53 compute-0 sudo[244623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:21:53 compute-0 sudo[244623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:54 compute-0 podman[244659]: 2026-01-31 08:21:54.074971379 +0000 UTC m=+0.026616595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:54 compute-0 podman[244659]: 2026-01-31 08:21:54.195022392 +0000 UTC m=+0.146667578 container create 1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:21:54 compute-0 systemd[1]: Started libpod-conmon-1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1.scope.
Jan 31 08:21:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:54 compute-0 podman[244659]: 2026-01-31 08:21:54.3863381 +0000 UTC m=+0.337983306 container init 1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:21:54 compute-0 podman[244659]: 2026-01-31 08:21:54.39286726 +0000 UTC m=+0.344512446 container start 1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:21:54 compute-0 cool_vaughan[244675]: 167 167
Jan 31 08:21:54 compute-0 systemd[1]: libpod-1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1.scope: Deactivated successfully.
Jan 31 08:21:54 compute-0 podman[244659]: 2026-01-31 08:21:54.490694329 +0000 UTC m=+0.442339535 container attach 1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:21:54 compute-0 podman[244659]: 2026-01-31 08:21:54.491164003 +0000 UTC m=+0.442809209 container died 1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:54 compute-0 ceph-mon[75294]: pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 31 08:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-764fa0cec003d4c375146f41e3b4ede908519da158ab1c0e0e7fd8c6ba3badde-merged.mount: Deactivated successfully.
Jan 31 08:21:55 compute-0 podman[244659]: 2026-01-31 08:21:55.150040062 +0000 UTC m=+1.101685268 container remove 1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:21:55 compute-0 nova_compute[240062]: 2026-01-31 08:21:55.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:55 compute-0 nova_compute[240062]: 2026-01-31 08:21:55.157 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:21:55 compute-0 systemd[1]: libpod-conmon-1aee6cfb6827105b1691ff0348ed981422f6cb82e79bcba313858474ae9e0db1.scope: Deactivated successfully.
Jan 31 08:21:55 compute-0 podman[244700]: 2026-01-31 08:21:55.251427799 +0000 UTC m=+0.018255044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:55 compute-0 podman[244700]: 2026-01-31 08:21:55.377430846 +0000 UTC m=+0.144258071 container create 9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:21:55 compute-0 systemd[1]: Started libpod-conmon-9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb.scope.
Jan 31 08:21:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a32bfa6980eb27b09a5bdc405a320e0555cf7540cc4e9bd400d11f39336be2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a32bfa6980eb27b09a5bdc405a320e0555cf7540cc4e9bd400d11f39336be2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a32bfa6980eb27b09a5bdc405a320e0555cf7540cc4e9bd400d11f39336be2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a32bfa6980eb27b09a5bdc405a320e0555cf7540cc4e9bd400d11f39336be2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:21:55 compute-0 podman[244700]: 2026-01-31 08:21:55.710829385 +0000 UTC m=+0.477656640 container init 9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:21:55 compute-0 podman[244700]: 2026-01-31 08:21:55.717327034 +0000 UTC m=+0.484154259 container start 9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:21:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 31 08:21:55 compute-0 podman[244700]: 2026-01-31 08:21:55.884364663 +0000 UTC m=+0.651191918 container attach 9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:56 compute-0 lvm[244802]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:21:56 compute-0 lvm[244801]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:21:56 compute-0 lvm[244801]: VG ceph_vg0 finished
Jan 31 08:21:56 compute-0 lvm[244802]: VG ceph_vg1 finished
Jan 31 08:21:56 compute-0 lvm[244808]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:21:56 compute-0 lvm[244808]: VG ceph_vg2 finished
Jan 31 08:21:56 compute-0 podman[244792]: 2026-01-31 08:21:56.407213919 +0000 UTC m=+0.074609920 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:21:56 compute-0 friendly_einstein[244717]: {}
Jan 31 08:21:56 compute-0 systemd[1]: libpod-9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb.scope: Deactivated successfully.
Jan 31 08:21:56 compute-0 systemd[1]: libpod-9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb.scope: Consumed 1.116s CPU time.
Jan 31 08:21:56 compute-0 podman[244700]: 2026-01-31 08:21:56.52360605 +0000 UTC m=+1.290433305 container died 9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a32bfa6980eb27b09a5bdc405a320e0555cf7540cc4e9bd400d11f39336be2f-merged.mount: Deactivated successfully.
Jan 31 08:21:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.174 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.175 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.175 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.175 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.204 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.204 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.204 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.204 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.205 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:57 compute-0 ceph-mon[75294]: pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 31 08:21:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Jan 31 08:21:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:21:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2243901178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:21:57 compute-0 podman[244700]: 2026-01-31 08:21:57.824532054 +0000 UTC m=+2.591359319 container remove 9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.834 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:57 compute-0 sudo[244623]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:21:57 compute-0 systemd[1]: libpod-conmon-9a4af7e5b0c1b9c3a05b20241b6e0287ebab5009053d50bc27c2e09d003325fb.scope: Deactivated successfully.
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.957 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.959 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5097MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.959 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:57 compute-0 nova_compute[240062]: 2026-01-31 08:21:57.959 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:57 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.023 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.023 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.037 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:58 compute-0 sudo[244880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:21:58 compute-0 sudo[244880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:58 compute-0 sudo[244880]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:21:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567982857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.693 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.697 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.743 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.745 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:21:58 compute-0 nova_compute[240062]: 2026-01-31 08:21:58.745 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:58 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2243901178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:21:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:58 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:21:58 compute-0 sshd-session[244905]: Invalid user solv from 193.32.162.145 port 34404
Jan 31 08:21:58 compute-0 sshd-session[244905]: Connection closed by invalid user solv 193.32.162.145 port 34404 [preauth]
Jan 31 08:21:59 compute-0 nova_compute[240062]: 2026-01-31 08:21:59.740 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:59 compute-0 nova_compute[240062]: 2026-01-31 08:21:59.740 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:59 compute-0 ceph-mon[75294]: pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Jan 31 08:21:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1567982857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:22:00 compute-0 nova_compute[240062]: 2026-01-31 08:22:00.096 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:00 compute-0 nova_compute[240062]: 2026-01-31 08:22:00.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:00 compute-0 nova_compute[240062]: 2026-01-31 08:22:00.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:01 compute-0 ceph-mon[75294]: pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:03 compute-0 ceph-mon[75294]: pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.496157195641518e-07 of space, bias 1.0, pg target 0.00019488471586924554 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.070672423406264e-06 of space, bias 4.0, pg target 0.002484806908087517 quantized to 16 (current 16)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:22:06 compute-0 ceph-mon[75294]: pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:07 compute-0 ceph-mon[75294]: pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:10 compute-0 ceph-mon[75294]: pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:11 compute-0 ceph-mon[75294]: pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:13 compute-0 ceph-mon[75294]: pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:14 compute-0 ceph-mon[75294]: pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:17 compute-0 ceph-mon[75294]: pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:18 compute-0 ceph-mon[75294]: pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:21 compute-0 ceph-mon[75294]: pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:23 compute-0 podman[244909]: 2026-01-31 08:22:23.173498949 +0000 UTC m=+0.042961337 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 08:22:23 compute-0 ceph-mon[75294]: pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:25 compute-0 ceph-mon[75294]: pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:27 compute-0 ceph-mon[75294]: pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:27 compute-0 podman[244928]: 2026-01-31 08:22:27.186315887 +0000 UTC m=+0.059874304 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 08:22:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:29 compute-0 ceph-mon[75294]: pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:31 compute-0 ceph-mon[75294]: pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:33 compute-0 ceph-mon[75294]: pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:35 compute-0 ceph-mon[75294]: pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:37 compute-0 ceph-mon[75294]: pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:38 compute-0 ceph-mon[75294]: pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:22:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2574042625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:22:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:22:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2574042625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:22:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2574042625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:22:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2574042625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:22:41 compute-0 ceph-mon[75294]: pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:43 compute-0 ceph-mon[75294]: pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:45 compute-0 ceph-mon[75294]: pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:22:46.964 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:22:46.964 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:22:46.964 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:47 compute-0 ceph-mon[75294]: pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:49 compute-0 ceph-mon[75294]: pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:22:50
Jan 31 08:22:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:22:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:22:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 31 08:22:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:22:50 compute-0 ceph-mon[75294]: pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:53 compute-0 ceph-mon[75294]: pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.077044) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847774077077, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2089, "num_deletes": 253, "total_data_size": 3503353, "memory_usage": 3562224, "flush_reason": "Manual Compaction"}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847774173465, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3436084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16525, "largest_seqno": 18613, "table_properties": {"data_size": 3426556, "index_size": 6086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18971, "raw_average_key_size": 20, "raw_value_size": 3407511, "raw_average_value_size": 3598, "num_data_blocks": 274, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847552, "oldest_key_time": 1769847552, "file_creation_time": 1769847774, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 96479 microseconds, and 4810 cpu microseconds.
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:22:54 compute-0 podman[244955]: 2026-01-31 08:22:54.199733965 +0000 UTC m=+0.070036633 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.173518) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3436084 bytes OK
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.173540) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.204992) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.205068) EVENT_LOG_v1 {"time_micros": 1769847774205056, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.205102) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3494609, prev total WAL file size 3494609, number of live WAL files 2.
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.205969) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3355KB)], [38(8085KB)]
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847774206025, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11715894, "oldest_snapshot_seqno": -1}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4537 keys, 9922660 bytes, temperature: kUnknown
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847774458084, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9922660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9888047, "index_size": 22151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 109690, "raw_average_key_size": 24, "raw_value_size": 9801868, "raw_average_value_size": 2160, "num_data_blocks": 938, "num_entries": 4537, "num_filter_entries": 4537, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769847774, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.458319) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9922660 bytes
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.470059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.5 rd, 39.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5057, records dropped: 520 output_compression: NoCompression
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.470088) EVENT_LOG_v1 {"time_micros": 1769847774470077, "job": 18, "event": "compaction_finished", "compaction_time_micros": 252127, "compaction_time_cpu_micros": 14740, "output_level": 6, "num_output_files": 1, "total_output_size": 9922660, "num_input_records": 5057, "num_output_records": 4537, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847774470475, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847774471355, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.205889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.471413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.471417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.471419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.471421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:22:54.471423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:55 compute-0 ceph-mon[75294]: pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:22:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:57 compute-0 ceph-mon[75294]: pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.182 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.182 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.182 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.183 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.183 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:22:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950110900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.698 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
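The pair of processutils lines above show nova shelling out to `ceph df --format=json` and logging the elapsed wall time (0.515s here). A hypothetical wrapper sketching that pattern, run a command, time it, parse the JSON it prints; the function name `timed_json` is an illustration, not nova's API:

```python
import json
import subprocess
import time

def timed_json(cmd):
    """Run cmd, parse its stdout as JSON, and return (data, elapsed_seconds).

    Hypothetical helper mirroring the oslo_concurrency.processutils pattern
    logged above (run subprocess, record wall time, hand JSON to the caller).
    """
    start = time.monotonic()
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out), time.monotonic() - start

# Stand-in for `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`:
data, elapsed = timed_json(['echo', '{"ok": true}'])
print(data, f"{elapsed:.3f}s")
```

In the log, the same command is issued twice within a second (08:22:57.183 and 08:22:57.960), once per resource-tracker audit pass.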
Jan 31 08:22:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.833 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.835 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.835 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.835 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.922 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.923 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:22:57 compute-0 nova_compute[240062]: 2026-01-31 08:22:57.960 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:58 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2950110900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:22:58 compute-0 podman[245016]: 2026-01-31 08:22:58.187377979 +0000 UTC m=+0.058763542 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:22:58 compute-0 sudo[245043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:22:58 compute-0 sudo[245043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:58 compute-0 sudo[245043]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:58 compute-0 sudo[245068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:22:58 compute-0 sudo[245068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438747502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:22:58 compute-0 nova_compute[240062]: 2026-01-31 08:22:58.505 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:58 compute-0 nova_compute[240062]: 2026-01-31 08:22:58.511 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:22:58 compute-0 nova_compute[240062]: 2026-01-31 08:22:58.531 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
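The inventory dict in the line above is what placement uses to derive schedulable capacity: roughly (total - reserved) scaled by allocation_ratio per resource class. A small sketch of that arithmetic, with the values copied from the log (min_unit/max_unit/step_size omitted for brevity):

```python
# Inventory as reported by nova's scheduler report client (values from the log).
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}

def schedulable(inv):
    """Capacity placement can hand out: (total - reserved) * allocation_ratio."""
    return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
            for rc, v in inv.items()}

cap = schedulable(inventory)
print(cap)  # 8 physical vcpus with a 4.0 overcommit ratio yield 32 schedulable VCPU
```

This explains why the final resource view shows 8 vcpus and 0 used, yet the host can still accept more than 8 guest vcpus of allocations.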
Jan 31 08:22:58 compute-0 nova_compute[240062]: 2026-01-31 08:22:58.532 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:22:58 compute-0 nova_compute[240062]: 2026-01-31 08:22:58.533 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:58 compute-0 sudo[245068]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:22:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:22:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:22:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:22:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:22:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:22:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
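The audit-channel entries above follow a fixed shape: source address, authenticated entity, the command as a JSON object, and a result token (here always "dispatch"). A minimal extraction sketch for that shape (the sample line is copied from this log):

```python
import json
import re

# Ceph audit channel entry as seen in this journal (copied from the log).
line = ("log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' "
        "entity='mgr.compute-0.lhuavc' cmd={\"prefix\": \"osd tree\", "
        "\"states\": [\"destroyed\"], \"format\": \"json\"} : dispatch")

AUDIT_RE = re.compile(
    r"from='(?P<src>[^']*)' entity='(?P<entity>[^']*)' "
    r"cmd=(?P<cmd>\{.*\}) : (?P<result>\w+)")

m = AUDIT_RE.search(line)
cmd = json.loads(m.group('cmd'))
print(m.group('entity'), cmd['prefix'], m.group('result'))
```

Grouping by the `prefix` key is a quick way to see which callers (client.openstack, mgr.compute-0.lhuavc) are driving mon traffic in a burst like this one.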
Jan 31 08:22:58 compute-0 sudo[245125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:22:58 compute-0 sudo[245125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:58 compute-0 sudo[245125]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:58 compute-0 sudo[245150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:22:58 compute-0 sudo[245150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:59 compute-0 podman[245187]: 2026-01-31 08:22:59.137501495 +0000 UTC m=+0.052747617 container create 521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:22:59 compute-0 ceph-mon[75294]: pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1438747502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:22:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:22:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:22:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:22:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:22:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:22:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:22:59 compute-0 systemd[1]: Started libpod-conmon-521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014.scope.
Jan 31 08:22:59 compute-0 podman[245187]: 2026-01-31 08:22:59.10359108 +0000 UTC m=+0.018837222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:59 compute-0 podman[245187]: 2026-01-31 08:22:59.241434072 +0000 UTC m=+0.156680254 container init 521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:59 compute-0 podman[245187]: 2026-01-31 08:22:59.246342108 +0000 UTC m=+0.161588240 container start 521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:22:59 compute-0 goofy_goldwasser[245203]: 167 167
Jan 31 08:22:59 compute-0 systemd[1]: libpod-521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014.scope: Deactivated successfully.
Jan 31 08:22:59 compute-0 podman[245187]: 2026-01-31 08:22:59.260239191 +0000 UTC m=+0.175485313 container attach 521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:22:59 compute-0 podman[245187]: 2026-01-31 08:22:59.260531029 +0000 UTC m=+0.175777151 container died 521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7375fc065d783754fec3b1ad3e2825bbe1a8a9489bbaafe274916787c610ea4-merged.mount: Deactivated successfully.
Jan 31 08:22:59 compute-0 podman[245187]: 2026-01-31 08:22:59.34608615 +0000 UTC m=+0.261332272 container remove 521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:22:59 compute-0 systemd[1]: libpod-conmon-521e06c9993be223bc92175f68c829911a852526d7bd991f55cd8aa1c82ce014.scope: Deactivated successfully.
Jan 31 08:22:59 compute-0 podman[245227]: 2026-01-31 08:22:59.46824255 +0000 UTC m=+0.050132334 container create 004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bell, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:22:59 compute-0 systemd[1]: Started libpod-conmon-004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34.scope.
Jan 31 08:22:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81d9c487e504e7d106b9b717e93b17527aece68ef13f369ee4ef7d738db52b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81d9c487e504e7d106b9b717e93b17527aece68ef13f369ee4ef7d738db52b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81d9c487e504e7d106b9b717e93b17527aece68ef13f369ee4ef7d738db52b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81d9c487e504e7d106b9b717e93b17527aece68ef13f369ee4ef7d738db52b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81d9c487e504e7d106b9b717e93b17527aece68ef13f369ee4ef7d738db52b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:59 compute-0 podman[245227]: 2026-01-31 08:22:59.440365211 +0000 UTC m=+0.022255065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:59 compute-0 nova_compute[240062]: 2026-01-31 08:22:59.535 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:59 compute-0 nova_compute[240062]: 2026-01-31 08:22:59.536 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:22:59 compute-0 nova_compute[240062]: 2026-01-31 08:22:59.537 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:22:59 compute-0 podman[245227]: 2026-01-31 08:22:59.557592676 +0000 UTC m=+0.139482490 container init 004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:22:59 compute-0 podman[245227]: 2026-01-31 08:22:59.562751028 +0000 UTC m=+0.144640802 container start 004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bell, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:22:59 compute-0 podman[245227]: 2026-01-31 08:22:59.581859555 +0000 UTC m=+0.163749429 container attach 004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bell, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:22:59 compute-0 nova_compute[240062]: 2026-01-31 08:22:59.585 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:22:59 compute-0 nova_compute[240062]: 2026-01-31 08:22:59.586 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:59 compute-0 eager_bell[245243]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:22:59 compute-0 eager_bell[245243]: --> All data devices are unavailable
Jan 31 08:22:59 compute-0 systemd[1]: libpod-004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34.scope: Deactivated successfully.
Jan 31 08:22:59 compute-0 podman[245227]: 2026-01-31 08:22:59.961614143 +0000 UTC m=+0.543503927 container died 004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e81d9c487e504e7d106b9b717e93b17527aece68ef13f369ee4ef7d738db52b8-merged.mount: Deactivated successfully.
Jan 31 08:23:00 compute-0 podman[245227]: 2026-01-31 08:23:00.054451524 +0000 UTC m=+0.636341298 container remove 004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bell, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:23:00 compute-0 systemd[1]: libpod-conmon-004d0e2ba1bedeb2b11e741efbb1ea5e60a8173ca6a2464bfa31c783d2cf0d34.scope: Deactivated successfully.
Jan 31 08:23:00 compute-0 sudo[245150]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:00 compute-0 sudo[245276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:00 compute-0 sudo[245276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:00 compute-0 sudo[245276]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:00 compute-0 sudo[245301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:23:00 compute-0 sudo[245301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:00 compute-0 nova_compute[240062]: 2026-01-31 08:23:00.200 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:00 compute-0 podman[245338]: 2026-01-31 08:23:00.46277821 +0000 UTC m=+0.040991292 container create fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:23:00 compute-0 systemd[1]: Started libpod-conmon-fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59.scope.
Jan 31 08:23:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:00 compute-0 podman[245338]: 2026-01-31 08:23:00.443246571 +0000 UTC m=+0.021459683 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:00 compute-0 podman[245338]: 2026-01-31 08:23:00.53997021 +0000 UTC m=+0.118183322 container init fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_carver, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:00 compute-0 podman[245338]: 2026-01-31 08:23:00.544424634 +0000 UTC m=+0.122637716 container start fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:23:00 compute-0 goofy_carver[245354]: 167 167
Jan 31 08:23:00 compute-0 systemd[1]: libpod-fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59.scope: Deactivated successfully.
Jan 31 08:23:00 compute-0 podman[245338]: 2026-01-31 08:23:00.562122731 +0000 UTC m=+0.140335823 container attach fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:23:00 compute-0 podman[245338]: 2026-01-31 08:23:00.562524483 +0000 UTC m=+0.140737585 container died fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b97957256b9add0de39892098cdaa19a18c9cc4d33c2a4a4a1be78b225c2721-merged.mount: Deactivated successfully.
Jan 31 08:23:00 compute-0 podman[245338]: 2026-01-31 08:23:00.693792054 +0000 UTC m=+0.272005156 container remove fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:23:00 compute-0 systemd[1]: libpod-conmon-fcdfc2933454fd6545ac6e03d12e696455838bab1a1034f4f23ba8b5269dad59.scope: Deactivated successfully.
Jan 31 08:23:00 compute-0 podman[245380]: 2026-01-31 08:23:00.842023724 +0000 UTC m=+0.061168079 container create 3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:00 compute-0 systemd[1]: Started libpod-conmon-3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613.scope.
Jan 31 08:23:00 compute-0 podman[245380]: 2026-01-31 08:23:00.808273313 +0000 UTC m=+0.027417688 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e502c4b7c9954e47fb5c06f9dc06c0a371d65ff8db7ff21bf3bc82c38fd148/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e502c4b7c9954e47fb5c06f9dc06c0a371d65ff8db7ff21bf3bc82c38fd148/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e502c4b7c9954e47fb5c06f9dc06c0a371d65ff8db7ff21bf3bc82c38fd148/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e502c4b7c9954e47fb5c06f9dc06c0a371d65ff8db7ff21bf3bc82c38fd148/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:00 compute-0 podman[245380]: 2026-01-31 08:23:00.949172441 +0000 UTC m=+0.168316826 container init 3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:23:00 compute-0 podman[245380]: 2026-01-31 08:23:00.954312782 +0000 UTC m=+0.173457137 container start 3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:23:00 compute-0 podman[245380]: 2026-01-31 08:23:00.964257066 +0000 UTC m=+0.183401421 container attach 3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:23:01 compute-0 nova_compute[240062]: 2026-01-31 08:23:01.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:01 compute-0 nova_compute[240062]: 2026-01-31 08:23:01.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:01 compute-0 nova_compute[240062]: 2026-01-31 08:23:01.157 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]: {
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:     "0": [
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:         {
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "devices": [
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "/dev/loop3"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             ],
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_name": "ceph_lv0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_size": "21470642176",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "name": "ceph_lv0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "tags": {
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cluster_name": "ceph",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.crush_device_class": "",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.encrypted": "0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.objectstore": "bluestore",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osd_id": "0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.type": "block",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.vdo": "0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.with_tpm": "0"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             },
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "type": "block",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "vg_name": "ceph_vg0"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:         }
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:     ],
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:     "1": [
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:         {
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "devices": [
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "/dev/loop4"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             ],
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_name": "ceph_lv1",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_size": "21470642176",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "name": "ceph_lv1",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "tags": {
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cluster_name": "ceph",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.crush_device_class": "",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.encrypted": "0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.objectstore": "bluestore",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osd_id": "1",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.type": "block",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.vdo": "0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.with_tpm": "0"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             },
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "type": "block",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "vg_name": "ceph_vg1"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:         }
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:     ],
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:     "2": [
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:         {
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "devices": [
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "/dev/loop5"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             ],
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_name": "ceph_lv2",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_size": "21470642176",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "name": "ceph_lv2",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "tags": {
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.cluster_name": "ceph",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.crush_device_class": "",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.encrypted": "0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.objectstore": "bluestore",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osd_id": "2",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.type": "block",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.vdo": "0",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:                 "ceph.with_tpm": "0"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             },
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "type": "block",
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:             "vg_name": "ceph_vg2"
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:         }
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]:     ]
Jan 31 08:23:01 compute-0 dreamy_noyce[245397]: }
Jan 31 08:23:01 compute-0 ceph-mon[75294]: pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:01 compute-0 systemd[1]: libpod-3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613.scope: Deactivated successfully.
Jan 31 08:23:01 compute-0 podman[245380]: 2026-01-31 08:23:01.22424389 +0000 UTC m=+0.443388245 container died 3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4e502c4b7c9954e47fb5c06f9dc06c0a371d65ff8db7ff21bf3bc82c38fd148-merged.mount: Deactivated successfully.
Jan 31 08:23:01 compute-0 podman[245380]: 2026-01-31 08:23:01.367857443 +0000 UTC m=+0.587001798 container remove 3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:01 compute-0 systemd[1]: libpod-conmon-3be16a63baf7bcdced9ae41668b4d395a444fc35fe14fece5fae1028ccfe6613.scope: Deactivated successfully.
Jan 31 08:23:01 compute-0 sudo[245301]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:01 compute-0 sudo[245420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:01 compute-0 sudo[245420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:01 compute-0 sudo[245420]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:01 compute-0 sudo[245445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:23:01 compute-0 sudo[245445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:01 compute-0 podman[245482]: 2026-01-31 08:23:01.858533811 +0000 UTC m=+0.048100258 container create 41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:23:01 compute-0 systemd[1]: Started libpod-conmon-41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68.scope.
Jan 31 08:23:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:01 compute-0 podman[245482]: 2026-01-31 08:23:01.829767887 +0000 UTC m=+0.019334334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:01 compute-0 podman[245482]: 2026-01-31 08:23:01.967791925 +0000 UTC m=+0.157358392 container init 41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:23:01 compute-0 podman[245482]: 2026-01-31 08:23:01.974580513 +0000 UTC m=+0.164146960 container start 41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:23:01 compute-0 jovial_cori[245498]: 167 167
Jan 31 08:23:01 compute-0 systemd[1]: libpod-41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68.scope: Deactivated successfully.
Jan 31 08:23:01 compute-0 podman[245482]: 2026-01-31 08:23:01.987990033 +0000 UTC m=+0.177556480 container attach 41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:01 compute-0 podman[245482]: 2026-01-31 08:23:01.988400164 +0000 UTC m=+0.177966611 container died 41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e9c1e2e232f20674ae11af289a875ce3a1a1abeb1016ee493f5f4add437eb3-merged.mount: Deactivated successfully.
Jan 31 08:23:02 compute-0 podman[245482]: 2026-01-31 08:23:02.10023402 +0000 UTC m=+0.289800467 container remove 41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:02 compute-0 systemd[1]: libpod-conmon-41293f99746c8ae094c5ed205948203c0489a31daa7519890d8f1a3f4fbbca68.scope: Deactivated successfully.
Jan 31 08:23:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:02 compute-0 podman[245525]: 2026-01-31 08:23:02.232723705 +0000 UTC m=+0.054624158 container create 52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:23:02 compute-0 podman[245525]: 2026-01-31 08:23:02.197270537 +0000 UTC m=+0.019171000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:02 compute-0 systemd[1]: Started libpod-conmon-52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53.scope.
Jan 31 08:23:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3653c0a5f75151fe808df124645f546ff868deea8b7901ceb60caf7767e41fb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3653c0a5f75151fe808df124645f546ff868deea8b7901ceb60caf7767e41fb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3653c0a5f75151fe808df124645f546ff868deea8b7901ceb60caf7767e41fb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3653c0a5f75151fe808df124645f546ff868deea8b7901ceb60caf7767e41fb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:02 compute-0 podman[245525]: 2026-01-31 08:23:02.336335864 +0000 UTC m=+0.158236347 container init 52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:02 compute-0 podman[245525]: 2026-01-31 08:23:02.341723152 +0000 UTC m=+0.163623605 container start 52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:23:02 compute-0 podman[245525]: 2026-01-31 08:23:02.353324363 +0000 UTC m=+0.175224836 container attach 52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:02 compute-0 lvm[245618]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:23:02 compute-0 lvm[245621]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:23:02 compute-0 lvm[245621]: VG ceph_vg1 finished
Jan 31 08:23:02 compute-0 lvm[245618]: VG ceph_vg0 finished
Jan 31 08:23:02 compute-0 lvm[245622]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:23:02 compute-0 lvm[245622]: VG ceph_vg2 finished
Jan 31 08:23:03 compute-0 sleepy_wu[245541]: {}
Jan 31 08:23:03 compute-0 systemd[1]: libpod-52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53.scope: Deactivated successfully.
Jan 31 08:23:03 compute-0 systemd[1]: libpod-52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53.scope: Consumed 1.041s CPU time.
Jan 31 08:23:03 compute-0 conmon[245541]: conmon 52053b74df454b28f6ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53.scope/container/memory.events
Jan 31 08:23:03 compute-0 podman[245626]: 2026-01-31 08:23:03.130285406 +0000 UTC m=+0.021314888 container died 52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3653c0a5f75151fe808df124645f546ff868deea8b7901ceb60caf7767e41fb3-merged.mount: Deactivated successfully.
Jan 31 08:23:03 compute-0 ceph-mon[75294]: pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:03 compute-0 podman[245626]: 2026-01-31 08:23:03.228325611 +0000 UTC m=+0.119355063 container remove 52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:03 compute-0 systemd[1]: libpod-conmon-52053b74df454b28f6ca4567b4c89d3546d72c1d309a0ed6f2336e948af48c53.scope: Deactivated successfully.
Jan 31 08:23:03 compute-0 sudo[245445]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:23:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:23:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:23:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:23:03 compute-0 sudo[245639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:23:03 compute-0 sudo[245639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 sudo[245639]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:23:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:23:05 compute-0 ceph-mon[75294]: pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.496157195641518e-07 of space, bias 1.0, pg target 0.00019488471586924554 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.070672423406264e-06 of space, bias 4.0, pg target 0.002484806908087517 quantized to 16 (current 16)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:23:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.232146) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847787232206, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 383, "num_deletes": 251, "total_data_size": 238375, "memory_usage": 245088, "flush_reason": "Manual Compaction"}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847787239583, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 236101, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18614, "largest_seqno": 18996, "table_properties": {"data_size": 233747, "index_size": 454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6190, "raw_average_key_size": 19, "raw_value_size": 229022, "raw_average_value_size": 731, "num_data_blocks": 20, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847775, "oldest_key_time": 1769847775, "file_creation_time": 1769847787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 7520 microseconds, and 1421 cpu microseconds.
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.239644) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 236101 bytes OK
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.239686) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.245529) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.245555) EVENT_LOG_v1 {"time_micros": 1769847787245549, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.245576) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 235875, prev total WAL file size 235875, number of live WAL files 2.
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.246057) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(230KB)], [41(9690KB)]
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847787246167, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10158761, "oldest_snapshot_seqno": -1}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4337 keys, 6804754 bytes, temperature: kUnknown
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847787304128, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6804754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6775997, "index_size": 16807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105973, "raw_average_key_size": 24, "raw_value_size": 6697779, "raw_average_value_size": 1544, "num_data_blocks": 705, "num_entries": 4337, "num_filter_entries": 4337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769847787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.304421) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6804754 bytes
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.311923) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.9 rd, 117.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.5 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(71.8) write-amplify(28.8) OK, records in: 4850, records dropped: 513 output_compression: NoCompression
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.311965) EVENT_LOG_v1 {"time_micros": 1769847787311949, "job": 20, "event": "compaction_finished", "compaction_time_micros": 58071, "compaction_time_cpu_micros": 14129, "output_level": 6, "num_output_files": 1, "total_output_size": 6804754, "num_input_records": 4850, "num_output_records": 4337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847787312368, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847787313679, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.245865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.313766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.313771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.313773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.313776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:07 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:23:07.313778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:07 compute-0 ceph-mon[75294]: pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:09 compute-0 ceph-mon[75294]: pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:11 compute-0 ceph-mon[75294]: pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:13 compute-0 ceph-mon[75294]: pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:15 compute-0 ceph-mon[75294]: pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:16 compute-0 ceph-mon[75294]: pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:19 compute-0 ceph-mon[75294]: pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:21 compute-0 ceph-mon[75294]: pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:23 compute-0 ceph-mon[75294]: pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:25 compute-0 ceph-mon[75294]: pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:25 compute-0 podman[245664]: 2026-01-31 08:23:25.174462454 +0000 UTC m=+0.046174768 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:23:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:27 compute-0 ceph-mon[75294]: pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:29 compute-0 ceph-mon[75294]: pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:29 compute-0 podman[245683]: 2026-01-31 08:23:29.204771074 +0000 UTC m=+0.076591302 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 08:23:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:31 compute-0 ceph-mon[75294]: pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:33 compute-0 ceph-mon[75294]: pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:35 compute-0 ceph-mon[75294]: pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:37 compute-0 ceph-mon[75294]: pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:23:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767563606' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:23:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:23:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767563606' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:23:39 compute-0 ceph-mon[75294]: pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2767563606' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:23:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2767563606' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:23:41 compute-0 ceph-mon[75294]: pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:43 compute-0 ceph-mon[75294]: pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:45 compute-0 ceph-mon[75294]: pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:23:46.964 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:23:46.965 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:23:46.965 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:47 compute-0 ceph-mon[75294]: pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:49 compute-0 ceph-mon[75294]: pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:23:50
Jan 31 08:23:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:23:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:23:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['volumes', 'vms', 'images', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data']
Jan 31 08:23:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:23:51 compute-0 ceph-mon[75294]: pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:52 compute-0 ceph-mon[75294]: pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:55 compute-0 ceph-mon[75294]: pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:23:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:56 compute-0 podman[245710]: 2026-01-31 08:23:56.192977899 +0000 UTC m=+0.065875004 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:23:57 compute-0 ceph-mon[75294]: pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:57 compute-0 nova_compute[240062]: 2026-01-31 08:23:57.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:57 compute-0 nova_compute[240062]: 2026-01-31 08:23:57.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:57 compute-0 nova_compute[240062]: 2026-01-31 08:23:57.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:23:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.188 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.188 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.189 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.205 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.205 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.206 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.234 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.234 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.234 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.235 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.235 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:59 compute-0 ceph-mon[75294]: pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:23:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739101616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.760 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.878 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.879 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5139MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.879 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.879 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.936 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.936 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:23:59 compute-0 nova_compute[240062]: 2026-01-31 08:23:59.950 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:00 compute-0 podman[245771]: 2026-01-31 08:24:00.182283468 +0000 UTC m=+0.055367679 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:24:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:24:00 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3682372580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:24:00 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2739101616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:24:00 compute-0 nova_compute[240062]: 2026-01-31 08:24:00.455 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:00 compute-0 nova_compute[240062]: 2026-01-31 08:24:00.459 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:00 compute-0 nova_compute[240062]: 2026-01-31 08:24:00.477 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:00 compute-0 nova_compute[240062]: 2026-01-31 08:24:00.479 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:24:00 compute-0 nova_compute[240062]: 2026-01-31 08:24:00.479 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:01 compute-0 nova_compute[240062]: 2026-01-31 08:24:01.478 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:01 compute-0 nova_compute[240062]: 2026-01-31 08:24:01.479 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:01 compute-0 ceph-mon[75294]: pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:01 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3682372580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:24:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:02 compute-0 nova_compute[240062]: 2026-01-31 08:24:02.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:02 compute-0 ceph-mon[75294]: pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:03 compute-0 nova_compute[240062]: 2026-01-31 08:24:03.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:03 compute-0 sudo[245799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:03 compute-0 sudo[245799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:03 compute-0 sudo[245799]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:03 compute-0 sudo[245824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:24:03 compute-0 sudo[245824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:03 compute-0 sudo[245824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:24:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:24:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:24:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:24:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:24:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:24:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:24:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:24:03 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:24:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:24:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:04 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:24:04 compute-0 sudo[245880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:04 compute-0 sudo[245880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:04 compute-0 sudo[245880]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:04 compute-0 sudo[245905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:24:04 compute-0 sudo[245905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:04 compute-0 podman[245942]: 2026-01-31 08:24:04.300904017 +0000 UTC m=+0.018517705 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:04 compute-0 podman[245942]: 2026-01-31 08:24:04.490045883 +0000 UTC m=+0.207659561 container create b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_heisenberg, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:04 compute-0 systemd[1]: Started libpod-conmon-b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de.scope.
Jan 31 08:24:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:04 compute-0 podman[245942]: 2026-01-31 08:24:04.96951615 +0000 UTC m=+0.687129828 container init b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_heisenberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:24:04 compute-0 podman[245942]: 2026-01-31 08:24:04.980732103 +0000 UTC m=+0.698345751 container start b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_heisenberg, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:04 compute-0 suspicious_heisenberg[245958]: 167 167
Jan 31 08:24:04 compute-0 systemd[1]: libpod-b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de.scope: Deactivated successfully.
Jan 31 08:24:05 compute-0 podman[245942]: 2026-01-31 08:24:05.286120848 +0000 UTC m=+1.003734526 container attach b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:24:05 compute-0 podman[245942]: 2026-01-31 08:24:05.286977641 +0000 UTC m=+1.004591299 container died b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:24:05 compute-0 ceph-mon[75294]: pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:24:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:24:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:24:05 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-49e543af3fafa0086f674e8362245240fc98d2b7cea6334ddc4ea2b7ccc066f4-merged.mount: Deactivated successfully.
Jan 31 08:24:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:24:06 compute-0 podman[245942]: 2026-01-31 08:24:06.45307657 +0000 UTC m=+2.170690218 container remove b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_heisenberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.496157195641518e-07 of space, bias 1.0, pg target 0.00019488471586924554 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.070672423406264e-06 of space, bias 4.0, pg target 0.002484806908087517 quantized to 16 (current 16)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:24:06 compute-0 systemd[1]: libpod-conmon-b65863e5e533c59be41218f19db07f7b21b44b00d5436ce62bf6ea87fb9fe5de.scope: Deactivated successfully.
Jan 31 08:24:06 compute-0 podman[245984]: 2026-01-31 08:24:06.552444448 +0000 UTC m=+0.019098370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:06 compute-0 podman[245984]: 2026-01-31 08:24:06.795223816 +0000 UTC m=+0.261877738 container create 7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:24:06 compute-0 ceph-mon[75294]: pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:06 compute-0 systemd[1]: Started libpod-conmon-7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659.scope.
Jan 31 08:24:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f52378d47a12576d53c300c5f89dea5756a609b5e3f262cbebe685d8a221be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f52378d47a12576d53c300c5f89dea5756a609b5e3f262cbebe685d8a221be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f52378d47a12576d53c300c5f89dea5756a609b5e3f262cbebe685d8a221be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f52378d47a12576d53c300c5f89dea5756a609b5e3f262cbebe685d8a221be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f52378d47a12576d53c300c5f89dea5756a609b5e3f262cbebe685d8a221be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 podman[245984]: 2026-01-31 08:24:06.911299211 +0000 UTC m=+0.377953113 container init 7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_benz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:24:06 compute-0 podman[245984]: 2026-01-31 08:24:06.916128047 +0000 UTC m=+0.382781959 container start 7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:24:06 compute-0 podman[245984]: 2026-01-31 08:24:06.93915846 +0000 UTC m=+0.405812392 container attach 7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_benz, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:24:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:07 compute-0 focused_benz[246000]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:24:07 compute-0 focused_benz[246000]: --> All data devices are unavailable
Jan 31 08:24:07 compute-0 systemd[1]: libpod-7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659.scope: Deactivated successfully.
Jan 31 08:24:07 compute-0 podman[245984]: 2026-01-31 08:24:07.287701963 +0000 UTC m=+0.754355885 container died 7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-87f52378d47a12576d53c300c5f89dea5756a609b5e3f262cbebe685d8a221be-merged.mount: Deactivated successfully.
Jan 31 08:24:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:07 compute-0 podman[245984]: 2026-01-31 08:24:07.987743976 +0000 UTC m=+1.454397878 container remove 7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_benz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:24:07 compute-0 systemd[1]: libpod-conmon-7edd3676ed6f7c6e474795663c5f3ae657de6e5ad6c75f3f4e041a3d2722c659.scope: Deactivated successfully.
Jan 31 08:24:08 compute-0 sudo[245905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:08 compute-0 sudo[246035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:08 compute-0 sudo[246035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:08 compute-0 sudo[246035]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:08 compute-0 sudo[246060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:24:08 compute-0 sudo[246060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:08 compute-0 podman[246097]: 2026-01-31 08:24:08.39670333 +0000 UTC m=+0.059368804 container create 4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:08 compute-0 systemd[1]: Started libpod-conmon-4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a.scope.
Jan 31 08:24:08 compute-0 podman[246097]: 2026-01-31 08:24:08.35427262 +0000 UTC m=+0.016938124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:08 compute-0 podman[246097]: 2026-01-31 08:24:08.558531661 +0000 UTC m=+0.221197135 container init 4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:24:08 compute-0 podman[246097]: 2026-01-31 08:24:08.563148851 +0000 UTC m=+0.225814325 container start 4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:08 compute-0 systemd[1]: libpod-4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a.scope: Deactivated successfully.
Jan 31 08:24:08 compute-0 reverent_ritchie[246113]: 167 167
Jan 31 08:24:08 compute-0 conmon[246113]: conmon 4380761e68881dd3180b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a.scope/container/memory.events
Jan 31 08:24:08 compute-0 podman[246097]: 2026-01-31 08:24:08.600785675 +0000 UTC m=+0.263451149 container attach 4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:08 compute-0 podman[246097]: 2026-01-31 08:24:08.602626764 +0000 UTC m=+0.265292258 container died 4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-83b4e5bc07ef7a5d0db33fdaf9f61c7d993d29e08d1569251c54e39d933cd458-merged.mount: Deactivated successfully.
Jan 31 08:24:08 compute-0 podman[246097]: 2026-01-31 08:24:08.796412301 +0000 UTC m=+0.459077775 container remove 4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ritchie, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:24:08 compute-0 systemd[1]: libpod-conmon-4380761e68881dd3180bb6cf1128012ac0e8e170933ab89fc181f26e3a88f94a.scope: Deactivated successfully.
Jan 31 08:24:08 compute-0 podman[246137]: 2026-01-31 08:24:08.933921786 +0000 UTC m=+0.052801921 container create cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:24:08 compute-0 systemd[1]: Started libpod-conmon-cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9.scope.
Jan 31 08:24:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735d6bbfd6ac6fb4fd3634808ff9181b40511e39b0372c5a44c0fb1d0b80d79f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735d6bbfd6ac6fb4fd3634808ff9181b40511e39b0372c5a44c0fb1d0b80d79f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735d6bbfd6ac6fb4fd3634808ff9181b40511e39b0372c5a44c0fb1d0b80d79f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735d6bbfd6ac6fb4fd3634808ff9181b40511e39b0372c5a44c0fb1d0b80d79f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:08 compute-0 podman[246137]: 2026-01-31 08:24:08.899193888 +0000 UTC m=+0.018074043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:09 compute-0 ceph-mon[75294]: pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:09 compute-0 podman[246137]: 2026-01-31 08:24:09.036386765 +0000 UTC m=+0.155266930 container init cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_bell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:24:09 compute-0 podman[246137]: 2026-01-31 08:24:09.042415593 +0000 UTC m=+0.161295728 container start cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:24:09 compute-0 podman[246137]: 2026-01-31 08:24:09.090579242 +0000 UTC m=+0.209459397 container attach cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_bell, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:24:09 compute-0 recursing_bell[246154]: {
Jan 31 08:24:09 compute-0 recursing_bell[246154]:     "0": [
Jan 31 08:24:09 compute-0 recursing_bell[246154]:         {
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "devices": [
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "/dev/loop3"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             ],
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_name": "ceph_lv0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_size": "21470642176",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "name": "ceph_lv0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "tags": {
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cluster_name": "ceph",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.crush_device_class": "",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.encrypted": "0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.objectstore": "bluestore",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osd_id": "0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.type": "block",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.vdo": "0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.with_tpm": "0"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             },
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "type": "block",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "vg_name": "ceph_vg0"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:         }
Jan 31 08:24:09 compute-0 recursing_bell[246154]:     ],
Jan 31 08:24:09 compute-0 recursing_bell[246154]:     "1": [
Jan 31 08:24:09 compute-0 recursing_bell[246154]:         {
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "devices": [
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "/dev/loop4"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             ],
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_name": "ceph_lv1",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_size": "21470642176",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "name": "ceph_lv1",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "tags": {
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cluster_name": "ceph",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.crush_device_class": "",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.encrypted": "0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.objectstore": "bluestore",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osd_id": "1",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.type": "block",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.vdo": "0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.with_tpm": "0"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             },
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "type": "block",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "vg_name": "ceph_vg1"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:         }
Jan 31 08:24:09 compute-0 recursing_bell[246154]:     ],
Jan 31 08:24:09 compute-0 recursing_bell[246154]:     "2": [
Jan 31 08:24:09 compute-0 recursing_bell[246154]:         {
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "devices": [
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "/dev/loop5"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             ],
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_name": "ceph_lv2",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_size": "21470642176",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "name": "ceph_lv2",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "tags": {
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.cluster_name": "ceph",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.crush_device_class": "",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.encrypted": "0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.objectstore": "bluestore",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osd_id": "2",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.type": "block",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.vdo": "0",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:                 "ceph.with_tpm": "0"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             },
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "type": "block",
Jan 31 08:24:09 compute-0 recursing_bell[246154]:             "vg_name": "ceph_vg2"
Jan 31 08:24:09 compute-0 recursing_bell[246154]:         }
Jan 31 08:24:09 compute-0 recursing_bell[246154]:     ]
Jan 31 08:24:09 compute-0 recursing_bell[246154]: }
Jan 31 08:24:09 compute-0 systemd[1]: libpod-cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9.scope: Deactivated successfully.
Jan 31 08:24:09 compute-0 podman[246137]: 2026-01-31 08:24:09.323273396 +0000 UTC m=+0.442153531 container died cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_bell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-735d6bbfd6ac6fb4fd3634808ff9181b40511e39b0372c5a44c0fb1d0b80d79f-merged.mount: Deactivated successfully.
Jan 31 08:24:09 compute-0 podman[246137]: 2026-01-31 08:24:09.713292364 +0000 UTC m=+0.832172499 container remove cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:24:09 compute-0 sudo[246060]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:09 compute-0 systemd[1]: libpod-conmon-cd1606c634bc7dfbbb09861cc6949eb70b42e4d9a34db810bfd47bf204e907d9.scope: Deactivated successfully.
Jan 31 08:24:09 compute-0 sudo[246175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:09 compute-0 sudo[246175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:09 compute-0 sudo[246175]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:09 compute-0 sudo[246200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:24:09 compute-0 sudo[246200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:10 compute-0 podman[246238]: 2026-01-31 08:24:10.115342327 +0000 UTC m=+0.020406005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:10 compute-0 podman[246238]: 2026-01-31 08:24:10.400451133 +0000 UTC m=+0.305514781 container create 6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_curie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:10 compute-0 systemd[1]: Started libpod-conmon-6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200.scope.
Jan 31 08:24:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:10 compute-0 podman[246238]: 2026-01-31 08:24:10.652807601 +0000 UTC m=+0.557871269 container init 6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:24:10 compute-0 podman[246238]: 2026-01-31 08:24:10.657049522 +0000 UTC m=+0.562113180 container start 6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_curie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:10 compute-0 focused_curie[246255]: 167 167
Jan 31 08:24:10 compute-0 systemd[1]: libpod-6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200.scope: Deactivated successfully.
Jan 31 08:24:10 compute-0 podman[246238]: 2026-01-31 08:24:10.664798084 +0000 UTC m=+0.569861752 container attach 6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_curie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:10 compute-0 podman[246238]: 2026-01-31 08:24:10.665357429 +0000 UTC m=+0.570421077 container died 6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d7daae676aab440c1bedca81f124cc49fa93e48c6d9bfef6e1cd1721d98bbf8-merged.mount: Deactivated successfully.
Jan 31 08:24:10 compute-0 podman[246238]: 2026-01-31 08:24:10.741361676 +0000 UTC m=+0.646425324 container remove 6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:24:10 compute-0 systemd[1]: libpod-conmon-6ed29406f3d77cebd8c083ecfdd7264919940eea5cdf760956832c30682f6200.scope: Deactivated successfully.
Jan 31 08:24:10 compute-0 podman[246281]: 2026-01-31 08:24:10.875461802 +0000 UTC m=+0.048179040 container create 3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:24:10 compute-0 systemd[1]: Started libpod-conmon-3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad.scope.
Jan 31 08:24:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5650617f7e9f17f88c1cc547deb34810b6c01b9250cb67a452d14e9d73c251/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5650617f7e9f17f88c1cc547deb34810b6c01b9250cb67a452d14e9d73c251/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5650617f7e9f17f88c1cc547deb34810b6c01b9250cb67a452d14e9d73c251/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5650617f7e9f17f88c1cc547deb34810b6c01b9250cb67a452d14e9d73c251/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:10 compute-0 podman[246281]: 2026-01-31 08:24:10.847165083 +0000 UTC m=+0.019882341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:10 compute-0 podman[246281]: 2026-01-31 08:24:10.967256073 +0000 UTC m=+0.139973341 container init 3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_fermat, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:24:10 compute-0 podman[246281]: 2026-01-31 08:24:10.972711715 +0000 UTC m=+0.145428953 container start 3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:10 compute-0 podman[246281]: 2026-01-31 08:24:10.981570467 +0000 UTC m=+0.154287705 container attach 3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:11 compute-0 ceph-mon[75294]: pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:11 compute-0 lvm[246373]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:24:11 compute-0 lvm[246373]: VG ceph_vg0 finished
Jan 31 08:24:11 compute-0 lvm[246376]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:24:11 compute-0 lvm[246376]: VG ceph_vg1 finished
Jan 31 08:24:11 compute-0 lvm[246378]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:24:11 compute-0 lvm[246378]: VG ceph_vg2 finished
Jan 31 08:24:11 compute-0 infallible_fermat[246297]: {}
Jan 31 08:24:11 compute-0 systemd[1]: libpod-3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad.scope: Deactivated successfully.
Jan 31 08:24:11 compute-0 systemd[1]: libpod-3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad.scope: Consumed 1.090s CPU time.
Jan 31 08:24:11 compute-0 podman[246281]: 2026-01-31 08:24:11.738148959 +0000 UTC m=+0.910866217 container died 3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_fermat, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a5650617f7e9f17f88c1cc547deb34810b6c01b9250cb67a452d14e9d73c251-merged.mount: Deactivated successfully.
Jan 31 08:24:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:11 compute-0 podman[246281]: 2026-01-31 08:24:11.959518057 +0000 UTC m=+1.132235285 container remove 3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_fermat, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:24:11 compute-0 systemd[1]: libpod-conmon-3ad33f7f41087b58184961cd283c8dea5a73e4c66beabfc03325ce40195b9cad.scope: Deactivated successfully.
Jan 31 08:24:11 compute-0 sudo[246200]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:24:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:24:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:24:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:24:12 compute-0 sudo[246393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:24:12 compute-0 sudo[246393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-0 sudo[246393]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:13 compute-0 ceph-mon[75294]: pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:24:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:24:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:15 compute-0 ceph-mon[75294]: pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:16 compute-0 ceph-mon[75294]: pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.246468) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847857246506, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 257, "total_data_size": 1056498, "memory_usage": 1082912, "flush_reason": "Manual Compaction"}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847857261498, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1046959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18997, "largest_seqno": 19808, "table_properties": {"data_size": 1042870, "index_size": 1805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8656, "raw_average_key_size": 18, "raw_value_size": 1034613, "raw_average_value_size": 2187, "num_data_blocks": 82, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847787, "oldest_key_time": 1769847787, "file_creation_time": 1769847857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 15085 microseconds, and 2483 cpu microseconds.
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.261553) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1046959 bytes OK
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.261574) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.265871) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.265925) EVENT_LOG_v1 {"time_micros": 1769847857265915, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.265953) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1052429, prev total WAL file size 1052429, number of live WAL files 2.
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.266511) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1022KB)], [44(6645KB)]
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847857266574, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7851713, "oldest_snapshot_seqno": -1}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4284 keys, 7704460 bytes, temperature: kUnknown
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847857326757, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7704460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7674636, "index_size": 18002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 105965, "raw_average_key_size": 24, "raw_value_size": 7595898, "raw_average_value_size": 1773, "num_data_blocks": 753, "num_entries": 4284, "num_filter_entries": 4284, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769847857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.326989) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7704460 bytes
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.331750) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.3 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.5 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(14.9) write-amplify(7.4) OK, records in: 4810, records dropped: 526 output_compression: NoCompression
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.331821) EVENT_LOG_v1 {"time_micros": 1769847857331775, "job": 22, "event": "compaction_finished", "compaction_time_micros": 60243, "compaction_time_cpu_micros": 12800, "output_level": 6, "num_output_files": 1, "total_output_size": 7704460, "num_input_records": 4810, "num_output_records": 4284, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847857332081, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847857332907, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.266403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.332992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.332996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.332999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.333001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:24:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:24:17.333002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:24:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:19 compute-0 ceph-mon[75294]: pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:21 compute-0 ceph-mon[75294]: pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:22 compute-0 ceph-mon[75294]: pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:25 compute-0 ceph-mon[75294]: pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:26 compute-0 ceph-mon[75294]: pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:27 compute-0 podman[246418]: 2026-01-31 08:24:27.202431813 +0000 UTC m=+0.075921036 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 31 08:24:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:29 compute-0 ceph-mon[75294]: pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:31 compute-0 podman[246439]: 2026-01-31 08:24:31.249367959 +0000 UTC m=+0.123485660 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:24:31 compute-0 ceph-mon[75294]: pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:33 compute-0 ceph-mon[75294]: pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:35 compute-0 ceph-mon[75294]: pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:38 compute-0 ceph-mon[75294]: pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:39 compute-0 ceph-mon[75294]: pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:24:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2276142420' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:24:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2276142420' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2276142420' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2276142420' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:41 compute-0 ceph-mon[75294]: pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:43 compute-0 ceph-mon[75294]: pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:45 compute-0 ceph-mon[75294]: pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:46 compute-0 ceph-mon[75294]: pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:24:46.966 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:24:46.966 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:24:46.966 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:49 compute-0 ceph-mon[75294]: pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:24:50
Jan 31 08:24:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:24:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:24:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data']
Jan 31 08:24:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:24:51 compute-0 ceph-mon[75294]: pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:52 compute-0 ceph-mon[75294]: pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:55 compute-0 ceph-mon[75294]: pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:24:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:57 compute-0 nova_compute[240062]: 2026-01-31 08:24:57.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:57 compute-0 nova_compute[240062]: 2026-01-31 08:24:57.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:24:57 compute-0 ceph-mon[75294]: pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:58 compute-0 nova_compute[240062]: 2026-01-31 08:24:58.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:58 compute-0 podman[246466]: 2026-01-31 08:24:58.166030391 +0000 UTC m=+0.037932473 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:24:59 compute-0 nova_compute[240062]: 2026-01-31 08:24:59.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:59 compute-0 ceph-mon[75294]: pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:00 compute-0 nova_compute[240062]: 2026-01-31 08:25:00.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:00 compute-0 nova_compute[240062]: 2026-01-31 08:25:00.283 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:00 compute-0 nova_compute[240062]: 2026-01-31 08:25:00.283 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:00 compute-0 nova_compute[240062]: 2026-01-31 08:25:00.283 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:00 compute-0 nova_compute[240062]: 2026-01-31 08:25:00.284 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:25:00 compute-0 nova_compute[240062]: 2026-01-31 08:25:00.284 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:25:00 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/789218215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:00 compute-0 nova_compute[240062]: 2026-01-31 08:25:00.901 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:00 compute-0 ceph-mon[75294]: pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.022 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.023 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.023 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.024 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.276 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.276 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.290 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:25:01 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852450472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.793 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.799 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.821 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.824 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:25:01 compute-0 nova_compute[240062]: 2026-01-31 08:25:01.824 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/789218215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/852450472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:02 compute-0 podman[246532]: 2026-01-31 08:25:02.238558134 +0000 UTC m=+0.101984347 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:25:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:02 compute-0 nova_compute[240062]: 2026-01-31 08:25:02.820 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:02 compute-0 nova_compute[240062]: 2026-01-31 08:25:02.821 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:02 compute-0 nova_compute[240062]: 2026-01-31 08:25:02.821 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:25:02 compute-0 nova_compute[240062]: 2026-01-31 08:25:02.821 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:25:02 compute-0 nova_compute[240062]: 2026-01-31 08:25:02.980 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:25:03 compute-0 nova_compute[240062]: 2026-01-31 08:25:03.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:03 compute-0 nova_compute[240062]: 2026-01-31 08:25:03.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:03 compute-0 ceph-mon[75294]: pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:05 compute-0 nova_compute[240062]: 2026-01-31 08:25:05.157 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:05 compute-0 sshd-session[246558]: Invalid user ubuntu from 80.94.92.182 port 54434
Jan 31 08:25:05 compute-0 sshd-session[246558]: Connection closed by invalid user ubuntu 80.94.92.182 port 54434 [preauth]
Jan 31 08:25:05 compute-0 ceph-mon[75294]: pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.496157195641518e-07 of space, bias 1.0, pg target 0.00019488471586924554 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.070672423406264e-06 of space, bias 4.0, pg target 0.002484806908087517 quantized to 16 (current 16)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:25:06 compute-0 ceph-mon[75294]: pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:09 compute-0 ceph-mon[75294]: pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:10 compute-0 ceph-mon[75294]: pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:12 compute-0 sudo[246560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:12 compute-0 sudo[246560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:12 compute-0 sudo[246560]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:12 compute-0 sudo[246585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:25:12 compute-0 sudo[246585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:12 compute-0 sudo[246585]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:25:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:12 compute-0 sudo[246641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:12 compute-0 sudo[246641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:12 compute-0 sudo[246641]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:12 compute-0 sudo[246666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:25:12 compute-0 sudo[246666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:12 compute-0 ceph-mon[75294]: pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:25:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:25:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:13 compute-0 podman[246704]: 2026-01-31 08:25:13.003229512 +0000 UTC m=+0.047557855 container create dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_matsumoto, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:25:13 compute-0 systemd[1]: Started libpod-conmon-dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8.scope.
Jan 31 08:25:13 compute-0 podman[246704]: 2026-01-31 08:25:12.97334143 +0000 UTC m=+0.017669793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:13 compute-0 podman[246704]: 2026-01-31 08:25:13.121256027 +0000 UTC m=+0.165584380 container init dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:25:13 compute-0 podman[246704]: 2026-01-31 08:25:13.1277264 +0000 UTC m=+0.172054733 container start dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:13 compute-0 charming_matsumoto[246720]: 167 167
Jan 31 08:25:13 compute-0 systemd[1]: libpod-dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8.scope: Deactivated successfully.
Jan 31 08:25:13 compute-0 podman[246704]: 2026-01-31 08:25:13.149594912 +0000 UTC m=+0.193923275 container attach dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_matsumoto, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:13 compute-0 podman[246704]: 2026-01-31 08:25:13.150641537 +0000 UTC m=+0.194969900 container died dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_matsumoto, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-37d648bfc19eae6de9363e58dccfaa72e3ec8b6917040a768581ae4b205d5abc-merged.mount: Deactivated successfully.
Jan 31 08:25:13 compute-0 podman[246704]: 2026-01-31 08:25:13.323371805 +0000 UTC m=+0.367700138 container remove dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:25:13 compute-0 systemd[1]: libpod-conmon-dd8784ffe82e910e912be8d0f1bd3ab13ffd4d42efc0d278aadbd0ab888efdd8.scope: Deactivated successfully.
Jan 31 08:25:13 compute-0 podman[246743]: 2026-01-31 08:25:13.446859599 +0000 UTC m=+0.044738447 container create a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:25:13 compute-0 systemd[1]: Started libpod-conmon-a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f.scope.
Jan 31 08:25:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de2e308f89fd81df13700e056997b6c36f41c575bae06d14da93ccf68e0ded7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de2e308f89fd81df13700e056997b6c36f41c575bae06d14da93ccf68e0ded7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de2e308f89fd81df13700e056997b6c36f41c575bae06d14da93ccf68e0ded7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de2e308f89fd81df13700e056997b6c36f41c575bae06d14da93ccf68e0ded7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de2e308f89fd81df13700e056997b6c36f41c575bae06d14da93ccf68e0ded7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:13 compute-0 podman[246743]: 2026-01-31 08:25:13.419432296 +0000 UTC m=+0.017311164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:13 compute-0 podman[246743]: 2026-01-31 08:25:13.543063083 +0000 UTC m=+0.140941951 container init a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:25:13 compute-0 podman[246743]: 2026-01-31 08:25:13.548508653 +0000 UTC m=+0.146387521 container start a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bardeen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:25:13 compute-0 podman[246743]: 2026-01-31 08:25:13.55635961 +0000 UTC m=+0.154238478 container attach a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bardeen, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:13 compute-0 ecstatic_bardeen[246760]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:25:13 compute-0 ecstatic_bardeen[246760]: --> All data devices are unavailable
Jan 31 08:25:13 compute-0 systemd[1]: libpod-a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f.scope: Deactivated successfully.
Jan 31 08:25:13 compute-0 podman[246743]: 2026-01-31 08:25:13.912494441 +0000 UTC m=+0.510373289 container died a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3de2e308f89fd81df13700e056997b6c36f41c575bae06d14da93ccf68e0ded7-merged.mount: Deactivated successfully.
Jan 31 08:25:14 compute-0 podman[246743]: 2026-01-31 08:25:14.57688481 +0000 UTC m=+1.174763658 container remove a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:25:14 compute-0 sudo[246666]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:14 compute-0 sudo[246793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:14 compute-0 sudo[246793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:14 compute-0 sudo[246793]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:14 compute-0 systemd[1]: libpod-conmon-a627435ffae94a6c1ff79c5950ce365879767f48f69c6023c4dac07c15afa63f.scope: Deactivated successfully.
Jan 31 08:25:14 compute-0 sudo[246818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:25:14 compute-0 sudo[246818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:15 compute-0 podman[246854]: 2026-01-31 08:25:14.911437527 +0000 UTC m=+0.018293977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:15 compute-0 podman[246854]: 2026-01-31 08:25:15.214150464 +0000 UTC m=+0.321006894 container create 3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shockley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:15 compute-0 ceph-mon[75294]: pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:15 compute-0 systemd[1]: Started libpod-conmon-3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe.scope.
Jan 31 08:25:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:15 compute-0 podman[246854]: 2026-01-31 08:25:15.622323806 +0000 UTC m=+0.729180266 container init 3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:25:15 compute-0 podman[246854]: 2026-01-31 08:25:15.628119524 +0000 UTC m=+0.734975954 container start 3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shockley, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:25:15 compute-0 gallant_shockley[246870]: 167 167
Jan 31 08:25:15 compute-0 systemd[1]: libpod-3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe.scope: Deactivated successfully.
Jan 31 08:25:15 compute-0 podman[246854]: 2026-01-31 08:25:15.725431544 +0000 UTC m=+0.832288014 container attach 3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:25:15 compute-0 podman[246854]: 2026-01-31 08:25:15.72656528 +0000 UTC m=+0.833421750 container died 3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:25:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-721698aef6fb2d1c035623d5d86e8db9586b31ee3e8a23ba328e7261191ac67e-merged.mount: Deactivated successfully.
Jan 31 08:25:16 compute-0 sshd-session[246886]: Invalid user solv from 193.32.162.145 port 35134
Jan 31 08:25:16 compute-0 sshd-session[246886]: Connection closed by invalid user solv 193.32.162.145 port 35134 [preauth]
Jan 31 08:25:16 compute-0 podman[246854]: 2026-01-31 08:25:16.7998491 +0000 UTC m=+1.906705520 container remove 3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shockley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:25:16 compute-0 systemd[1]: libpod-conmon-3f6c83bd06506012f0085967e0a19a47d34a7db7046d5aa5f2be8ae7e39aa2fe.scope: Deactivated successfully.
Jan 31 08:25:17 compute-0 podman[246896]: 2026-01-31 08:25:16.921118442 +0000 UTC m=+0.018929743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:17 compute-0 podman[246896]: 2026-01-31 08:25:17.048314543 +0000 UTC m=+0.146125794 container create 751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:25:17 compute-0 systemd[1]: Started libpod-conmon-751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5.scope.
Jan 31 08:25:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3fd559ab24f39352292efd475e058ecdfeb10fb8d467595c88aed6897c0c41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3fd559ab24f39352292efd475e058ecdfeb10fb8d467595c88aed6897c0c41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3fd559ab24f39352292efd475e058ecdfeb10fb8d467595c88aed6897c0c41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e3fd559ab24f39352292efd475e058ecdfeb10fb8d467595c88aed6897c0c41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:17 compute-0 podman[246896]: 2026-01-31 08:25:17.404312311 +0000 UTC m=+0.502123592 container init 751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:25:17 compute-0 podman[246896]: 2026-01-31 08:25:17.410675682 +0000 UTC m=+0.508486933 container start 751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:25:17 compute-0 podman[246896]: 2026-01-31 08:25:17.549242756 +0000 UTC m=+0.647054007 container attach 751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:25:17 compute-0 ceph-mon[75294]: pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]: {
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:     "0": [
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:         {
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "devices": [
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "/dev/loop3"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             ],
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_name": "ceph_lv0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_size": "21470642176",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "name": "ceph_lv0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "tags": {
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cluster_name": "ceph",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.crush_device_class": "",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.encrypted": "0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.objectstore": "bluestore",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osd_id": "0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.type": "block",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.vdo": "0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.with_tpm": "0"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             },
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "type": "block",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "vg_name": "ceph_vg0"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:         }
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:     ],
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:     "1": [
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:         {
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "devices": [
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "/dev/loop4"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             ],
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_name": "ceph_lv1",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_size": "21470642176",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "name": "ceph_lv1",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "tags": {
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cluster_name": "ceph",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.crush_device_class": "",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.encrypted": "0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.objectstore": "bluestore",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osd_id": "1",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.type": "block",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.vdo": "0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.with_tpm": "0"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             },
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "type": "block",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "vg_name": "ceph_vg1"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:         }
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:     ],
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:     "2": [
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:         {
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "devices": [
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "/dev/loop5"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             ],
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_name": "ceph_lv2",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_size": "21470642176",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "name": "ceph_lv2",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "tags": {
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.cluster_name": "ceph",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.crush_device_class": "",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.encrypted": "0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.objectstore": "bluestore",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osd_id": "2",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.type": "block",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.vdo": "0",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:                 "ceph.with_tpm": "0"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             },
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "type": "block",
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:             "vg_name": "ceph_vg2"
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:         }
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]:     ]
Jan 31 08:25:17 compute-0 admiring_rosalind[246912]: }
Jan 31 08:25:17 compute-0 systemd[1]: libpod-751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5.scope: Deactivated successfully.
Jan 31 08:25:17 compute-0 podman[246896]: 2026-01-31 08:25:17.701171499 +0000 UTC m=+0.798982780 container died 751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:25:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e3fd559ab24f39352292efd475e058ecdfeb10fb8d467595c88aed6897c0c41-merged.mount: Deactivated successfully.
Jan 31 08:25:18 compute-0 podman[246896]: 2026-01-31 08:25:18.301999553 +0000 UTC m=+1.399810804 container remove 751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rosalind, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:25:18 compute-0 sudo[246818]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:18 compute-0 systemd[1]: libpod-conmon-751598e15b4b9168ecd18dfba75afcdc2ff122c05b0eb861ed98e36e9bd783d5.scope: Deactivated successfully.
Jan 31 08:25:18 compute-0 sudo[246932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:18 compute-0 sudo[246932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:18 compute-0 sudo[246932]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:18 compute-0 sudo[246957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:25:18 compute-0 sudo[246957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:18 compute-0 podman[246995]: 2026-01-31 08:25:18.677341782 +0000 UTC m=+0.021578646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:18 compute-0 podman[246995]: 2026-01-31 08:25:18.905509352 +0000 UTC m=+0.249746196 container create 55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:18 compute-0 ceph-mon[75294]: pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:18 compute-0 systemd[1]: Started libpod-conmon-55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e.scope.
Jan 31 08:25:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:19 compute-0 podman[246995]: 2026-01-31 08:25:19.074023929 +0000 UTC m=+0.418260793 container init 55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:25:19 compute-0 podman[246995]: 2026-01-31 08:25:19.07993857 +0000 UTC m=+0.424175404 container start 55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:25:19 compute-0 infallible_black[247012]: 167 167
Jan 31 08:25:19 compute-0 systemd[1]: libpod-55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e.scope: Deactivated successfully.
Jan 31 08:25:19 compute-0 podman[246995]: 2026-01-31 08:25:19.101271029 +0000 UTC m=+0.445507873 container attach 55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:25:19 compute-0 podman[246995]: 2026-01-31 08:25:19.101780151 +0000 UTC m=+0.446016995 container died 55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-471cf11785cde455a969288cc3ab45ed4cde44e4d11d81cfcde2f7ce901a9c85-merged.mount: Deactivated successfully.
Jan 31 08:25:19 compute-0 podman[246995]: 2026-01-31 08:25:19.301013951 +0000 UTC m=+0.645250795 container remove 55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:25:19 compute-0 systemd[1]: libpod-conmon-55295a57aecd032f27fb3286126e88424e80a187ca5bdef28bf9cea7a112527e.scope: Deactivated successfully.
Jan 31 08:25:19 compute-0 podman[247036]: 2026-01-31 08:25:19.475992142 +0000 UTC m=+0.093399247 container create 4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_kalam, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:25:19 compute-0 podman[247036]: 2026-01-31 08:25:19.402233425 +0000 UTC m=+0.019640550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:19 compute-0 systemd[1]: Started libpod-conmon-4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689.scope.
Jan 31 08:25:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bfe6423703e810023d1d4d53bab2b7749d4ae6204495387bcddcb47784a082/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bfe6423703e810023d1d4d53bab2b7749d4ae6204495387bcddcb47784a082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bfe6423703e810023d1d4d53bab2b7749d4ae6204495387bcddcb47784a082/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bfe6423703e810023d1d4d53bab2b7749d4ae6204495387bcddcb47784a082/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:19 compute-0 podman[247036]: 2026-01-31 08:25:19.721197198 +0000 UTC m=+0.338604323 container init 4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:25:19 compute-0 podman[247036]: 2026-01-31 08:25:19.728029711 +0000 UTC m=+0.345436816 container start 4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:25:19 compute-0 podman[247036]: 2026-01-31 08:25:19.786225609 +0000 UTC m=+0.403632744 container attach 4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_kalam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:20 compute-0 lvm[247132]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:25:20 compute-0 lvm[247131]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:25:20 compute-0 lvm[247131]: VG ceph_vg1 finished
Jan 31 08:25:20 compute-0 lvm[247132]: VG ceph_vg0 finished
Jan 31 08:25:20 compute-0 lvm[247134]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:25:20 compute-0 lvm[247134]: VG ceph_vg2 finished
Jan 31 08:25:20 compute-0 brave_kalam[247053]: {}
Jan 31 08:25:20 compute-0 systemd[1]: libpod-4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689.scope: Deactivated successfully.
Jan 31 08:25:20 compute-0 systemd[1]: libpod-4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689.scope: Consumed 1.067s CPU time.
Jan 31 08:25:20 compute-0 podman[247036]: 2026-01-31 08:25:20.456668734 +0000 UTC m=+1.074075829 container died 4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_kalam, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2bfe6423703e810023d1d4d53bab2b7749d4ae6204495387bcddcb47784a082-merged.mount: Deactivated successfully.
Jan 31 08:25:21 compute-0 podman[247036]: 2026-01-31 08:25:21.004272989 +0000 UTC m=+1.621680134 container remove 4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:25:21 compute-0 systemd[1]: libpod-conmon-4f9a49027bc594753c7ab4aad5c925385bda1bf28a271f06e7a4e341264a9689.scope: Deactivated successfully.
Jan 31 08:25:21 compute-0 sudo[246957]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:25:21 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:25:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:25:21 compute-0 ceph-mon[75294]: pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:21 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:25:22 compute-0 sudo[247149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:25:22 compute-0 sudo[247149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:22 compute-0 sudo[247149]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:22 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:25:22 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:25:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:24 compute-0 ceph-mon[75294]: pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:25 compute-0 ceph-mon[75294]: pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:27 compute-0 ceph-mon[75294]: pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:29 compute-0 podman[247174]: 2026-01-31 08:25:29.173499055 +0000 UTC m=+0.045978496 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:25:29 compute-0 ceph-mon[75294]: pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:31 compute-0 ceph-mon[75294]: pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:32 compute-0 ceph-mon[75294]: pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:33 compute-0 podman[247195]: 2026-01-31 08:25:33.211048556 +0000 UTC m=+0.085284914 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:25:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:35 compute-0 ceph-mon[75294]: pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:37 compute-0 ceph-mon[75294]: pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:39 compute-0 ceph-mon[75294]: pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:25:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/242774980' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:25:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:25:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/242774980' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:25:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/242774980' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:25:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/242774980' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:25:41 compute-0 ceph-mon[75294]: pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:43 compute-0 ceph-mon[75294]: pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:45 compute-0 ceph-mon[75294]: pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:25:46.966 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:25:46.967 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:25:46.967 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:47 compute-0 ceph-mon[75294]: pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:49 compute-0 ceph-mon[75294]: pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:25:50
Jan 31 08:25:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:25:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:25:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Jan 31 08:25:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:25:51 compute-0 ceph-mon[75294]: pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:53 compute-0 ceph-mon[75294]: pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:54 compute-0 nova_compute[240062]: 2026-01-31 08:25:54.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:54 compute-0 nova_compute[240062]: 2026-01-31 08:25:54.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:25:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:55 compute-0 ceph-mon[75294]: pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:25:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:57 compute-0 nova_compute[240062]: 2026-01-31 08:25:57.167 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:57 compute-0 nova_compute[240062]: 2026-01-31 08:25:57.167 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:25:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:57 compute-0 ceph-mon[75294]: pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:58 compute-0 nova_compute[240062]: 2026-01-31 08:25:58.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:58 compute-0 ceph-mon[75294]: pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:59 compute-0 nova_compute[240062]: 2026-01-31 08:25:59.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:59 compute-0 nova_compute[240062]: 2026-01-31 08:25:59.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:59 compute-0 nova_compute[240062]: 2026-01-31 08:25:59.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:25:59 compute-0 nova_compute[240062]: 2026-01-31 08:25:59.200 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:25:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:00 compute-0 podman[247222]: 2026-01-31 08:26:00.162313754 +0000 UTC m=+0.038808906 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:26:01 compute-0 nova_compute[240062]: 2026-01-31 08:26:01.199 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:01 compute-0 nova_compute[240062]: 2026-01-31 08:26:01.199 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:26:01 compute-0 nova_compute[240062]: 2026-01-31 08:26:01.200 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:26:01 compute-0 ceph-mon[75294]: pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:01 compute-0 nova_compute[240062]: 2026-01-31 08:26:01.565 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:26:01 compute-0 nova_compute[240062]: 2026-01-31 08:26:01.566 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:01 compute-0 anacron[99328]: Job `cron.daily' started
Jan 31 08:26:01 compute-0 anacron[99328]: Job `cron.daily' terminated
Jan 31 08:26:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:02 compute-0 nova_compute[240062]: 2026-01-31 08:26:02.957 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:02 compute-0 nova_compute[240062]: 2026-01-31 08:26:02.958 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:02 compute-0 nova_compute[240062]: 2026-01-31 08:26:02.958 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:02 compute-0 nova_compute[240062]: 2026-01-31 08:26:02.958 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:26:02 compute-0 nova_compute[240062]: 2026-01-31 08:26:02.958 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:26:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/595796620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.514 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:03 compute-0 ceph-mon[75294]: pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.641 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.642 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.642 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.643 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.924 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.925 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:26:03 compute-0 nova_compute[240062]: 2026-01-31 08:26:03.994 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing inventories for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.045 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating ProviderTree inventory for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.046 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.059 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing aggregate associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.085 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing trait associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.098 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:04 compute-0 podman[247267]: 2026-01-31 08:26:04.188836952 +0000 UTC m=+0.056874667 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:26:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:26:04 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/920651784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.642 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.648 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/595796620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.750 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.751 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.752 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:04 compute-0 nova_compute[240062]: 2026-01-31 08:26:04.752 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:05 compute-0 ceph-mon[75294]: pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:05 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/920651784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:05 compute-0 nova_compute[240062]: 2026-01-31 08:26:05.867 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:05 compute-0 nova_compute[240062]: 2026-01-31 08:26:05.868 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:05 compute-0 nova_compute[240062]: 2026-01-31 08:26:05.950 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:05 compute-0 nova_compute[240062]: 2026-01-31 08:26:05.951 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:05 compute-0 nova_compute[240062]: 2026-01-31 08:26:05.951 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.496157195641518e-07 of space, bias 1.0, pg target 0.00019488471586924554 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.070672423406264e-06 of space, bias 4.0, pg target 0.002484806908087517 quantized to 16 (current 16)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:26:06 compute-0 ceph-mon[75294]: pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:09 compute-0 ceph-mon[75294]: pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:11 compute-0 ceph-mon[75294]: pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:13 compute-0 ceph-mon[75294]: pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:15 compute-0 ceph-mon[75294]: pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:17 compute-0 ceph-mon[75294]: pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:19 compute-0 ceph-mon[75294]: pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:21 compute-0 ceph-mon[75294]: pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:22 compute-0 sudo[247314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:22 compute-0 sudo[247314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:22 compute-0 sudo[247314]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:22 compute-0 sudo[247339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:26:22 compute-0 sudo[247339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:22 compute-0 sudo[247339]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:26:22 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:26:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:26:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:26:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:26:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:26:22 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:26:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:26:22 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:26:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:26:22 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:22 compute-0 sudo[247393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:22 compute-0 sudo[247393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:22 compute-0 sudo[247393]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:22 compute-0 sudo[247418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:26:22 compute-0 sudo[247418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:22 compute-0 podman[247454]: 2026-01-31 08:26:22.884891943 +0000 UTC m=+0.036742147 container create d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:26:22 compute-0 systemd[1]: Started libpod-conmon-d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a.scope.
Jan 31 08:26:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:22 compute-0 podman[247454]: 2026-01-31 08:26:22.957805551 +0000 UTC m=+0.109655785 container init d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:22 compute-0 podman[247454]: 2026-01-31 08:26:22.867492508 +0000 UTC m=+0.019342742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:22 compute-0 podman[247454]: 2026-01-31 08:26:22.964313197 +0000 UTC m=+0.116163411 container start d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:26:22 compute-0 podman[247454]: 2026-01-31 08:26:22.968299561 +0000 UTC m=+0.120149775 container attach d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mestorf, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:22 compute-0 naughty_mestorf[247470]: 167 167
Jan 31 08:26:22 compute-0 systemd[1]: libpod-d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a.scope: Deactivated successfully.
Jan 31 08:26:22 compute-0 podman[247454]: 2026-01-31 08:26:22.969707235 +0000 UTC m=+0.121557449 container died d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcb092ae5d6b03dc3d3820eab45ff450c1df783c464d6c4c42f1e202e77af310-merged.mount: Deactivated successfully.
Jan 31 08:26:23 compute-0 podman[247454]: 2026-01-31 08:26:23.019811659 +0000 UTC m=+0.171661883 container remove d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:26:23 compute-0 systemd[1]: libpod-conmon-d906f09e493524553798c9d7c0839b94017521ac6f40585cd4034445a28c3e1a.scope: Deactivated successfully.
Jan 31 08:26:23 compute-0 podman[247494]: 2026-01-31 08:26:23.148427436 +0000 UTC m=+0.039408100 container create 1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:26:23 compute-0 systemd[1]: Started libpod-conmon-1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680.scope.
Jan 31 08:26:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f45e1aa4e7756e3adabe59a7c9a1840d35ea5a67450c5c8ba61535c65af8c04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f45e1aa4e7756e3adabe59a7c9a1840d35ea5a67450c5c8ba61535c65af8c04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f45e1aa4e7756e3adabe59a7c9a1840d35ea5a67450c5c8ba61535c65af8c04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f45e1aa4e7756e3adabe59a7c9a1840d35ea5a67450c5c8ba61535c65af8c04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f45e1aa4e7756e3adabe59a7c9a1840d35ea5a67450c5c8ba61535c65af8c04/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:23 compute-0 podman[247494]: 2026-01-31 08:26:23.205091728 +0000 UTC m=+0.096072412 container init 1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:26:23 compute-0 podman[247494]: 2026-01-31 08:26:23.213088377 +0000 UTC m=+0.104069031 container start 1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:26:23 compute-0 podman[247494]: 2026-01-31 08:26:23.216445528 +0000 UTC m=+0.107426272 container attach 1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:26:23 compute-0 podman[247494]: 2026-01-31 08:26:23.130593241 +0000 UTC m=+0.021573935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:23 compute-0 ceph-mon[75294]: pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:26:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:26:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:26:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:26:23 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:23 compute-0 serene_khayyam[247509]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:26:23 compute-0 serene_khayyam[247509]: --> All data devices are unavailable
Jan 31 08:26:23 compute-0 systemd[1]: libpod-1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680.scope: Deactivated successfully.
Jan 31 08:26:23 compute-0 podman[247494]: 2026-01-31 08:26:23.622468298 +0000 UTC m=+0.513448962 container died 1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f45e1aa4e7756e3adabe59a7c9a1840d35ea5a67450c5c8ba61535c65af8c04-merged.mount: Deactivated successfully.
Jan 31 08:26:23 compute-0 podman[247494]: 2026-01-31 08:26:23.669490989 +0000 UTC m=+0.560471653 container remove 1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_khayyam, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:26:23 compute-0 systemd[1]: libpod-conmon-1b304d9515df9e55499c811722d7e168b330a859f6385d99d79f99f060c31680.scope: Deactivated successfully.
Jan 31 08:26:23 compute-0 sudo[247418]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:23 compute-0 sudo[247542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:23 compute-0 sudo[247542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:23 compute-0 sudo[247542]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:23 compute-0 sudo[247567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:26:23 compute-0 sudo[247567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:24 compute-0 podman[247604]: 2026-01-31 08:26:24.080758714 +0000 UTC m=+0.035425965 container create 3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:26:24 compute-0 systemd[1]: Started libpod-conmon-3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9.scope.
Jan 31 08:26:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:24 compute-0 podman[247604]: 2026-01-31 08:26:24.145756233 +0000 UTC m=+0.100423504 container init 3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wright, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:26:24 compute-0 podman[247604]: 2026-01-31 08:26:24.149879202 +0000 UTC m=+0.104546453 container start 3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wright, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:26:24 compute-0 clever_wright[247621]: 167 167
Jan 31 08:26:24 compute-0 systemd[1]: libpod-3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9.scope: Deactivated successfully.
Jan 31 08:26:24 compute-0 conmon[247621]: conmon 3c008f16e0c4cc5b281a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9.scope/container/memory.events
Jan 31 08:26:24 compute-0 podman[247604]: 2026-01-31 08:26:24.15777599 +0000 UTC m=+0.112443271 container attach 3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:26:24 compute-0 podman[247604]: 2026-01-31 08:26:24.158430476 +0000 UTC m=+0.113097727 container died 3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:24 compute-0 podman[247604]: 2026-01-31 08:26:24.066125566 +0000 UTC m=+0.020792847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd8c0d5fd14ce2e96eaff6e2dd0f2c69c300e1e8d5e13e6904a39dc6079edba-merged.mount: Deactivated successfully.
Jan 31 08:26:24 compute-0 podman[247604]: 2026-01-31 08:26:24.203423069 +0000 UTC m=+0.158090320 container remove 3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wright, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:24 compute-0 systemd[1]: libpod-conmon-3c008f16e0c4cc5b281ac4601edb87c8418acbd144b2adc6fef79de79916ade9.scope: Deactivated successfully.
Jan 31 08:26:24 compute-0 podman[247645]: 2026-01-31 08:26:24.328745557 +0000 UTC m=+0.045241220 container create 3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cori, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:26:24 compute-0 systemd[1]: Started libpod-conmon-3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6.scope.
Jan 31 08:26:24 compute-0 podman[247645]: 2026-01-31 08:26:24.304710354 +0000 UTC m=+0.021206027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f34c63fa7ecf7a2620b1a9f191bcfa975142f49a59c53b314b932b6594ac39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f34c63fa7ecf7a2620b1a9f191bcfa975142f49a59c53b314b932b6594ac39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f34c63fa7ecf7a2620b1a9f191bcfa975142f49a59c53b314b932b6594ac39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f34c63fa7ecf7a2620b1a9f191bcfa975142f49a59c53b314b932b6594ac39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:24 compute-0 podman[247645]: 2026-01-31 08:26:24.425981025 +0000 UTC m=+0.142476708 container init 3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cori, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:24 compute-0 podman[247645]: 2026-01-31 08:26:24.432421758 +0000 UTC m=+0.148917421 container start 3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:26:24 compute-0 podman[247645]: 2026-01-31 08:26:24.43793352 +0000 UTC m=+0.154429203 container attach 3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cori, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:26:24 compute-0 musing_cori[247662]: {
Jan 31 08:26:24 compute-0 musing_cori[247662]:     "0": [
Jan 31 08:26:24 compute-0 musing_cori[247662]:         {
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "devices": [
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "/dev/loop3"
Jan 31 08:26:24 compute-0 musing_cori[247662]:             ],
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_name": "ceph_lv0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_size": "21470642176",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "name": "ceph_lv0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "tags": {
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cluster_name": "ceph",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.crush_device_class": "",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.encrypted": "0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.objectstore": "bluestore",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osd_id": "0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.type": "block",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.vdo": "0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.with_tpm": "0"
Jan 31 08:26:24 compute-0 musing_cori[247662]:             },
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "type": "block",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "vg_name": "ceph_vg0"
Jan 31 08:26:24 compute-0 musing_cori[247662]:         }
Jan 31 08:26:24 compute-0 musing_cori[247662]:     ],
Jan 31 08:26:24 compute-0 musing_cori[247662]:     "1": [
Jan 31 08:26:24 compute-0 musing_cori[247662]:         {
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "devices": [
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "/dev/loop4"
Jan 31 08:26:24 compute-0 musing_cori[247662]:             ],
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_name": "ceph_lv1",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_size": "21470642176",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "name": "ceph_lv1",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "tags": {
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cluster_name": "ceph",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.crush_device_class": "",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.encrypted": "0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.objectstore": "bluestore",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osd_id": "1",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.type": "block",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.vdo": "0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.with_tpm": "0"
Jan 31 08:26:24 compute-0 musing_cori[247662]:             },
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "type": "block",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "vg_name": "ceph_vg1"
Jan 31 08:26:24 compute-0 musing_cori[247662]:         }
Jan 31 08:26:24 compute-0 musing_cori[247662]:     ],
Jan 31 08:26:24 compute-0 musing_cori[247662]:     "2": [
Jan 31 08:26:24 compute-0 musing_cori[247662]:         {
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "devices": [
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "/dev/loop5"
Jan 31 08:26:24 compute-0 musing_cori[247662]:             ],
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_name": "ceph_lv2",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_size": "21470642176",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "name": "ceph_lv2",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "tags": {
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.cluster_name": "ceph",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.crush_device_class": "",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.encrypted": "0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.objectstore": "bluestore",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osd_id": "2",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.type": "block",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.vdo": "0",
Jan 31 08:26:24 compute-0 musing_cori[247662]:                 "ceph.with_tpm": "0"
Jan 31 08:26:24 compute-0 musing_cori[247662]:             },
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "type": "block",
Jan 31 08:26:24 compute-0 musing_cori[247662]:             "vg_name": "ceph_vg2"
Jan 31 08:26:24 compute-0 musing_cori[247662]:         }
Jan 31 08:26:24 compute-0 musing_cori[247662]:     ]
Jan 31 08:26:24 compute-0 musing_cori[247662]: }
Jan 31 08:26:24 compute-0 systemd[1]: libpod-3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6.scope: Deactivated successfully.
Jan 31 08:26:24 compute-0 podman[247645]: 2026-01-31 08:26:24.713235184 +0000 UTC m=+0.429730857 container died 3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:26:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f34c63fa7ecf7a2620b1a9f191bcfa975142f49a59c53b314b932b6594ac39-merged.mount: Deactivated successfully.
Jan 31 08:26:24 compute-0 podman[247645]: 2026-01-31 08:26:24.824177128 +0000 UTC m=+0.540672791 container remove 3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:26:24 compute-0 systemd[1]: libpod-conmon-3ae379933245cdc1b441424875c006b81cbbb1fbdc2c5228794d98b36fe1e3a6.scope: Deactivated successfully.
Jan 31 08:26:24 compute-0 sudo[247567]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:24 compute-0 sudo[247685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:24 compute-0 sudo[247685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:24 compute-0 sudo[247685]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:24 compute-0 sudo[247710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:26:24 compute-0 sudo[247710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:25 compute-0 podman[247746]: 2026-01-31 08:26:25.226664274 +0000 UTC m=+0.044794239 container create 59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:26:25 compute-0 systemd[1]: Started libpod-conmon-59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1.scope.
Jan 31 08:26:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:25 compute-0 podman[247746]: 2026-01-31 08:26:25.202205691 +0000 UTC m=+0.020335696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:25 compute-0 podman[247746]: 2026-01-31 08:26:25.305025652 +0000 UTC m=+0.123155627 container init 59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:26:25 compute-0 podman[247746]: 2026-01-31 08:26:25.310433951 +0000 UTC m=+0.128563916 container start 59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cray, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:26:25 compute-0 elated_cray[247762]: 167 167
Jan 31 08:26:25 compute-0 systemd[1]: libpod-59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1.scope: Deactivated successfully.
Jan 31 08:26:25 compute-0 podman[247746]: 2026-01-31 08:26:25.315203645 +0000 UTC m=+0.133333630 container attach 59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:25 compute-0 conmon[247762]: conmon 59143fbe25c40524765e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1.scope/container/memory.events
Jan 31 08:26:25 compute-0 podman[247746]: 2026-01-31 08:26:25.315785509 +0000 UTC m=+0.133915494 container died 59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-210f3afdfb2b41e429420c79782e7f6b2dee7cc5f1916db7ff829fe5c7799f7c-merged.mount: Deactivated successfully.
Jan 31 08:26:25 compute-0 ceph-mon[75294]: pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:25 compute-0 podman[247746]: 2026-01-31 08:26:25.461182216 +0000 UTC m=+0.279312181 container remove 59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:26:25 compute-0 systemd[1]: libpod-conmon-59143fbe25c40524765e83ebea5fb858b34e9e9d669dde045d0c732f2e960de1.scope: Deactivated successfully.
Jan 31 08:26:25 compute-0 podman[247784]: 2026-01-31 08:26:25.610736791 +0000 UTC m=+0.061224891 container create 1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_germain, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:25 compute-0 podman[247784]: 2026-01-31 08:26:25.571247469 +0000 UTC m=+0.021735599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:25 compute-0 systemd[1]: Started libpod-conmon-1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca.scope.
Jan 31 08:26:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e470bd09d93fede2e2107c5e19f10993aee3454f09fd55b0c661507fdb94d19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e470bd09d93fede2e2107c5e19f10993aee3454f09fd55b0c661507fdb94d19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e470bd09d93fede2e2107c5e19f10993aee3454f09fd55b0c661507fdb94d19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e470bd09d93fede2e2107c5e19f10993aee3454f09fd55b0c661507fdb94d19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:25 compute-0 podman[247784]: 2026-01-31 08:26:25.707931318 +0000 UTC m=+0.158419448 container init 1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_germain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:25 compute-0 podman[247784]: 2026-01-31 08:26:25.715078178 +0000 UTC m=+0.165566278 container start 1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:25 compute-0 podman[247784]: 2026-01-31 08:26:25.719364651 +0000 UTC m=+0.169852751 container attach 1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 08:26:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:26 compute-0 lvm[247879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:26:26 compute-0 lvm[247880]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:26:26 compute-0 lvm[247879]: VG ceph_vg0 finished
Jan 31 08:26:26 compute-0 lvm[247880]: VG ceph_vg1 finished
Jan 31 08:26:26 compute-0 lvm[247882]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:26:26 compute-0 lvm[247882]: VG ceph_vg2 finished
Jan 31 08:26:26 compute-0 distracted_germain[247801]: {}
Jan 31 08:26:26 compute-0 systemd[1]: libpod-1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca.scope: Deactivated successfully.
Jan 31 08:26:26 compute-0 podman[247784]: 2026-01-31 08:26:26.40637807 +0000 UTC m=+0.856866180 container died 1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:26:26 compute-0 systemd[1]: libpod-1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca.scope: Consumed 1.015s CPU time.
Jan 31 08:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e470bd09d93fede2e2107c5e19f10993aee3454f09fd55b0c661507fdb94d19-merged.mount: Deactivated successfully.
Jan 31 08:26:26 compute-0 podman[247784]: 2026-01-31 08:26:26.461497104 +0000 UTC m=+0.911985204 container remove 1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:26:26 compute-0 systemd[1]: libpod-conmon-1f05b0b82fd9246320f6b9aa4876fedf3d5a7f904c698aa709e6819e706656ca.scope: Deactivated successfully.
Jan 31 08:26:26 compute-0 sudo[247710]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:26:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:26:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:26:26 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:26:26 compute-0 sudo[247899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:26:26 compute-0 sudo[247899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:26 compute-0 sudo[247899]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:27 compute-0 ceph-mon[75294]: pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:26:27 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:26:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:26:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 4621 writes, 20K keys, 4621 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4621 writes, 4621 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1342 writes, 6081 keys, 1342 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
                                           Interval WAL: 1343 writes, 1343 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     45.3      0.51              0.05        11    0.046       0      0       0.0       0.0
                                             L6      1/0    7.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     72.9     60.4      1.24              0.14        10    0.124     44K   5294       0.0       0.0
                                            Sum      1/0    7.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     51.8     56.0      1.74              0.19        21    0.083     44K   5294       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6     37.3     37.2      1.26              0.09        10    0.126     24K   3098       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     72.9     60.4      1.24              0.14        10    0.124     44K   5294       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.8      0.45              0.05        10    0.045       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.022, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.05 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.7 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cc8bf858d0#2 capacity: 304.00 MB usage: 6.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(423,6.59 MB,2.16657%) FilterBlock(22,130.73 KB,0.0419968%) IndexBlock(22,250.66 KB,0.0805202%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:26:29 compute-0 ceph-mon[75294]: pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:31 compute-0 podman[247924]: 2026-01-31 08:26:31.185391009 +0000 UTC m=+0.053718622 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:26:31 compute-0 ceph-mon[75294]: pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:32 compute-0 ceph-mon[75294]: pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:34 compute-0 ceph-mon[75294]: pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:35 compute-0 podman[247943]: 2026-01-31 08:26:35.220509042 +0000 UTC m=+0.093625123 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:37 compute-0 ceph-mon[75294]: pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:26:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3022132548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:26:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:26:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3022132548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:26:39 compute-0 ceph-mon[75294]: pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3022132548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:26:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3022132548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:26:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:40 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 08:26:40 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:40.899400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:26:40 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 08:26:40 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848000899430, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1359, "num_deletes": 251, "total_data_size": 2155677, "memory_usage": 2202736, "flush_reason": "Manual Compaction"}
Jan 31 08:26:40 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848001132867, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2124502, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19809, "largest_seqno": 21167, "table_properties": {"data_size": 2118082, "index_size": 3619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13248, "raw_average_key_size": 19, "raw_value_size": 2105262, "raw_average_value_size": 3137, "num_data_blocks": 166, "num_entries": 671, "num_filter_entries": 671, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847858, "oldest_key_time": 1769847858, "file_creation_time": 1769848000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 233520 microseconds, and 3605 cpu microseconds.
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.132914) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2124502 bytes OK
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.132934) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.188332) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.188382) EVENT_LOG_v1 {"time_micros": 1769848001188372, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.188408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2149633, prev total WAL file size 2150794, number of live WAL files 2.
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.189797) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2074KB)], [47(7523KB)]
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848001189829, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9828962, "oldest_snapshot_seqno": -1}
Jan 31 08:26:41 compute-0 ceph-mon[75294]: pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4441 keys, 8053319 bytes, temperature: kUnknown
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848001347246, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8053319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8022199, "index_size": 18914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 109786, "raw_average_key_size": 24, "raw_value_size": 7940414, "raw_average_value_size": 1787, "num_data_blocks": 790, "num_entries": 4441, "num_filter_entries": 4441, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769848001, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.347435) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8053319 bytes
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.364712) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.4 rd, 51.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.3 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 4955, records dropped: 514 output_compression: NoCompression
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.364746) EVENT_LOG_v1 {"time_micros": 1769848001364733, "job": 24, "event": "compaction_finished", "compaction_time_micros": 157475, "compaction_time_cpu_micros": 14034, "output_level": 6, "num_output_files": 1, "total_output_size": 8053319, "num_input_records": 4955, "num_output_records": 4441, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848001365168, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848001366077, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.189725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.366133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.366140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.366142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.366145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:41 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:26:41.366147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:43 compute-0 ceph-mon[75294]: pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:45 compute-0 ceph-mon[75294]: pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:46 compute-0 nova_compute[240062]: 2026-01-31 08:26:46.426 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:26:46.967 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:26:46.967 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:26:46.967 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:47 compute-0 ceph-mon[75294]: pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:49 compute-0 ceph-mon[75294]: pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:26:50
Jan 31 08:26:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:26:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:26:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 31 08:26:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:26:50 compute-0 ceph-mon[75294]: pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:53 compute-0 ceph-mon[75294]: pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:55 compute-0 ceph-mon[75294]: pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:26:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:57 compute-0 ceph-mon[75294]: pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:58 compute-0 nova_compute[240062]: 2026-01-31 08:26:58.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:58 compute-0 nova_compute[240062]: 2026-01-31 08:26:58.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:26:59 compute-0 nova_compute[240062]: 2026-01-31 08:26:59.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:59 compute-0 nova_compute[240062]: 2026-01-31 08:26:59.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:59 compute-0 ceph-mon[75294]: pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:00 compute-0 ceph-mon[75294]: pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.237 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.237 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.274 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.275 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.275 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.275 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.275 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:27:01 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005483626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.776 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.896 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.897 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.897 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.897 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:01 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3005483626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.965 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.965 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:27:01 compute-0 nova_compute[240062]: 2026-01-31 08:27:01.986 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:02 compute-0 podman[248010]: 2026-01-31 08:27:02.16425507 +0000 UTC m=+0.038399287 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:27:02 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691736577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:02 compute-0 nova_compute[240062]: 2026-01-31 08:27:02.493 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:02 compute-0 nova_compute[240062]: 2026-01-31 08:27:02.498 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:27:02 compute-0 nova_compute[240062]: 2026-01-31 08:27:02.519 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:27:02 compute-0 nova_compute[240062]: 2026-01-31 08:27:02.521 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:27:02 compute-0 nova_compute[240062]: 2026-01-31 08:27:02.521 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:03 compute-0 ceph-mon[75294]: pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:03 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2691736577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:03 compute-0 nova_compute[240062]: 2026-01-31 08:27:03.516 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:04 compute-0 nova_compute[240062]: 2026-01-31 08:27:04.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:05 compute-0 nova_compute[240062]: 2026-01-31 08:27:05.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:05 compute-0 ceph-mon[75294]: pgmap v1043: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:06 compute-0 nova_compute[240062]: 2026-01-31 08:27:06.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:06 compute-0 podman[248033]: 2026-01-31 08:27:06.189895298 +0000 UTC m=+0.055065024 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.496157195641518e-07 of space, bias 1.0, pg target 0.00019488471586924554 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.070672423406264e-06 of space, bias 4.0, pg target 0.002484806908087517 quantized to 16 (current 16)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:27:07 compute-0 ceph-mon[75294]: pgmap v1044: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:09 compute-0 ceph-mon[75294]: pgmap v1045: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:11 compute-0 ceph-mon[75294]: pgmap v1046: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:12 compute-0 ceph-mon[75294]: pgmap v1047: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:15 compute-0 ceph-mon[75294]: pgmap v1048: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:17 compute-0 ceph-mon[75294]: pgmap v1049: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:19 compute-0 ceph-mon[75294]: pgmap v1050: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:21 compute-0 ceph-mon[75294]: pgmap v1051: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:22 compute-0 ceph-mon[75294]: pgmap v1052: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:25 compute-0 ceph-mon[75294]: pgmap v1053: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:26 compute-0 sudo[248059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:26 compute-0 sudo[248059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:26 compute-0 sudo[248059]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:26 compute-0 sudo[248084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:27:26 compute-0 sudo[248084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:27 compute-0 sudo[248084]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:27 compute-0 ceph-mon[75294]: pgmap v1054: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:27 compute-0 sudo[248141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:27 compute-0 sudo[248141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:27 compute-0 sudo[248141]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:27 compute-0 sudo[248166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 31 08:27:27 compute-0 sudo[248166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:27 compute-0 sudo[248166]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:27 compute-0 sudo[248208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:27 compute-0 sudo[248208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:27 compute-0 sudo[248208]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:27 compute-0 sudo[248233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:27:27 compute-0 sudo[248233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:27 compute-0 podman[248271]: 2026-01-31 08:27:27.780517509 +0000 UTC m=+0.037245046 container create 0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:27:27 compute-0 systemd[1]: Started libpod-conmon-0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab.scope.
Jan 31 08:27:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:27 compute-0 podman[248271]: 2026-01-31 08:27:27.858789785 +0000 UTC m=+0.115517352 container init 0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:27:27 compute-0 podman[248271]: 2026-01-31 08:27:27.763190707 +0000 UTC m=+0.019918264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:27 compute-0 podman[248271]: 2026-01-31 08:27:27.867436061 +0000 UTC m=+0.124163588 container start 0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:27:27 compute-0 podman[248271]: 2026-01-31 08:27:27.871634365 +0000 UTC m=+0.128361922 container attach 0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:27:27 compute-0 funny_cannon[248288]: 167 167
Jan 31 08:27:27 compute-0 systemd[1]: libpod-0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab.scope: Deactivated successfully.
Jan 31 08:27:27 compute-0 conmon[248288]: conmon 0d50e05756f012940abf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab.scope/container/memory.events
Jan 31 08:27:27 compute-0 podman[248271]: 2026-01-31 08:27:27.875575523 +0000 UTC m=+0.132303060 container died 0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cannon, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-de4736a0d92b0a43f74df09d6dcda15149b38341496030c6cb9c23003d7572e2-merged.mount: Deactivated successfully.
Jan 31 08:27:27 compute-0 podman[248271]: 2026-01-31 08:27:27.929861685 +0000 UTC m=+0.186589222 container remove 0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cannon, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:27:27 compute-0 systemd[1]: libpod-conmon-0d50e05756f012940abff401c3e96559230eb7e0da0fcffd212e725f3a75c5ab.scope: Deactivated successfully.
Jan 31 08:27:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:28 compute-0 podman[248313]: 2026-01-31 08:27:28.059477791 +0000 UTC m=+0.038236534 container create bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:27:28 compute-0 systemd[1]: Started libpod-conmon-bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310.scope.
Jan 31 08:27:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ad44b2563172ae36eec867b239b621b940cecc8d3ad0bc80d0913beb0cc1e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ad44b2563172ae36eec867b239b621b940cecc8d3ad0bc80d0913beb0cc1e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ad44b2563172ae36eec867b239b621b940cecc8d3ad0bc80d0913beb0cc1e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ad44b2563172ae36eec867b239b621b940cecc8d3ad0bc80d0913beb0cc1e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ad44b2563172ae36eec867b239b621b940cecc8d3ad0bc80d0913beb0cc1e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:28 compute-0 podman[248313]: 2026-01-31 08:27:28.123434406 +0000 UTC m=+0.102193149 container init bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:28 compute-0 podman[248313]: 2026-01-31 08:27:28.131137807 +0000 UTC m=+0.109896550 container start bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:27:28 compute-0 podman[248313]: 2026-01-31 08:27:28.135601859 +0000 UTC m=+0.114360622 container attach bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:27:28 compute-0 podman[248313]: 2026-01-31 08:27:28.042942471 +0000 UTC m=+0.021701244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:27:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:28 compute-0 bold_haibt[248330]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:27:28 compute-0 bold_haibt[248330]: --> All data devices are unavailable
Jan 31 08:27:28 compute-0 systemd[1]: libpod-bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310.scope: Deactivated successfully.
Jan 31 08:27:28 compute-0 podman[248313]: 2026-01-31 08:27:28.501999807 +0000 UTC m=+0.480758550 container died bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0ad44b2563172ae36eec867b239b621b940cecc8d3ad0bc80d0913beb0cc1e3-merged.mount: Deactivated successfully.
Jan 31 08:27:28 compute-0 podman[248313]: 2026-01-31 08:27:28.918735468 +0000 UTC m=+0.897494211 container remove bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 08:27:28 compute-0 sudo[248233]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:28 compute-0 systemd[1]: libpod-conmon-bb81dc1e2364b24fcceb3ff58fd1c48622cdf14e418412bfa24d45b88dae8310.scope: Deactivated successfully.
Jan 31 08:27:28 compute-0 sudo[248362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:28 compute-0 sudo[248362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:28 compute-0 sudo[248362]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:29 compute-0 sudo[248387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:27:29 compute-0 sudo[248387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:29 compute-0 podman[248424]: 2026-01-31 08:27:29.29074862 +0000 UTC m=+0.054460358 container create da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:27:29 compute-0 podman[248424]: 2026-01-31 08:27:29.252899587 +0000 UTC m=+0.016611305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:29 compute-0 systemd[1]: Started libpod-conmon-da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e.scope.
Jan 31 08:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:29 compute-0 podman[248424]: 2026-01-31 08:27:29.421033045 +0000 UTC m=+0.184744793 container init da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:29 compute-0 podman[248424]: 2026-01-31 08:27:29.425528987 +0000 UTC m=+0.189240705 container start da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:27:29 compute-0 podman[248424]: 2026-01-31 08:27:29.42891059 +0000 UTC m=+0.192622308 container attach da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:27:29 compute-0 kind_leavitt[248440]: 167 167
Jan 31 08:27:29 compute-0 systemd[1]: libpod-da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e.scope: Deactivated successfully.
Jan 31 08:27:29 compute-0 podman[248424]: 2026-01-31 08:27:29.430156693 +0000 UTC m=+0.193868411 container died da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c34dcbc07d2eb71547c00343bc663f840036a502fd723dc6d98484dc0bac40-merged.mount: Deactivated successfully.
Jan 31 08:27:29 compute-0 podman[248424]: 2026-01-31 08:27:29.465973161 +0000 UTC m=+0.229684879 container remove da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:27:29 compute-0 systemd[1]: libpod-conmon-da23b8eb65bb2975701a2cf1162416b802fe91f746c032b4755cd4668107e25e.scope: Deactivated successfully.
Jan 31 08:27:29 compute-0 ceph-mon[75294]: pgmap v1055: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:29 compute-0 podman[248464]: 2026-01-31 08:27:29.584916067 +0000 UTC m=+0.036867107 container create 30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:27:29 compute-0 systemd[1]: Started libpod-conmon-30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240.scope.
Jan 31 08:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b02b072c70c2dcc9a004f3faa588fdb2bfb014aa3e5f7fca2cc5f47c82a6d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b02b072c70c2dcc9a004f3faa588fdb2bfb014aa3e5f7fca2cc5f47c82a6d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b02b072c70c2dcc9a004f3faa588fdb2bfb014aa3e5f7fca2cc5f47c82a6d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b02b072c70c2dcc9a004f3faa588fdb2bfb014aa3e5f7fca2cc5f47c82a6d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:29 compute-0 podman[248464]: 2026-01-31 08:27:29.568007685 +0000 UTC m=+0.019958745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:29 compute-0 podman[248464]: 2026-01-31 08:27:29.673094333 +0000 UTC m=+0.125045403 container init 30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:27:29 compute-0 podman[248464]: 2026-01-31 08:27:29.677993006 +0000 UTC m=+0.129944046 container start 30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:27:29 compute-0 podman[248464]: 2026-01-31 08:27:29.681998785 +0000 UTC m=+0.133949855 container attach 30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:27:29 compute-0 amazing_ellis[248481]: {
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:     "0": [
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:         {
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "devices": [
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "/dev/loop3"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             ],
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_name": "ceph_lv0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_size": "21470642176",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "name": "ceph_lv0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "tags": {
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cluster_name": "ceph",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.crush_device_class": "",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.encrypted": "0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.objectstore": "bluestore",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osd_id": "0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.type": "block",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.vdo": "0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.with_tpm": "0"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             },
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "type": "block",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "vg_name": "ceph_vg0"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:         }
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:     ],
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:     "1": [
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:         {
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "devices": [
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "/dev/loop4"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             ],
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_name": "ceph_lv1",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_size": "21470642176",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "name": "ceph_lv1",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "tags": {
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cluster_name": "ceph",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.crush_device_class": "",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.encrypted": "0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.objectstore": "bluestore",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osd_id": "1",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.type": "block",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.vdo": "0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.with_tpm": "0"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             },
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "type": "block",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "vg_name": "ceph_vg1"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:         }
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:     ],
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:     "2": [
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:         {
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "devices": [
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "/dev/loop5"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             ],
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_name": "ceph_lv2",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_size": "21470642176",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "name": "ceph_lv2",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "tags": {
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.cluster_name": "ceph",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.crush_device_class": "",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.encrypted": "0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.objectstore": "bluestore",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osd_id": "2",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.type": "block",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.vdo": "0",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:                 "ceph.with_tpm": "0"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             },
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "type": "block",
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:             "vg_name": "ceph_vg2"
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:         }
Jan 31 08:27:29 compute-0 amazing_ellis[248481]:     ]
Jan 31 08:27:29 compute-0 amazing_ellis[248481]: }
Jan 31 08:27:29 compute-0 systemd[1]: libpod-30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240.scope: Deactivated successfully.
Jan 31 08:27:29 compute-0 podman[248464]: 2026-01-31 08:27:29.9335718 +0000 UTC m=+0.385522840 container died 30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b02b072c70c2dcc9a004f3faa588fdb2bfb014aa3e5f7fca2cc5f47c82a6d7-merged.mount: Deactivated successfully.
Jan 31 08:27:30 compute-0 podman[248464]: 2026-01-31 08:27:30.033041775 +0000 UTC m=+0.484992815 container remove 30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:27:30 compute-0 systemd[1]: libpod-conmon-30ba690d720c557269ce0f95c123fe05821d7bc7cd18edf3573c8058c1ea6240.scope: Deactivated successfully.
Jan 31 08:27:30 compute-0 sudo[248387]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:30 compute-0 sudo[248501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:30 compute-0 sudo[248501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:30 compute-0 sudo[248501]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:30 compute-0 sudo[248526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:27:30 compute-0 sudo[248526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:30 compute-0 podman[248562]: 2026-01-31 08:27:30.420841026 +0000 UTC m=+0.040606759 container create 283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:27:30 compute-0 systemd[1]: Started libpod-conmon-283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353.scope.
Jan 31 08:27:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:30 compute-0 podman[248562]: 2026-01-31 08:27:30.486354705 +0000 UTC m=+0.106120488 container init 283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_colden, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:30 compute-0 podman[248562]: 2026-01-31 08:27:30.491529825 +0000 UTC m=+0.111295558 container start 283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_colden, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:27:30 compute-0 podman[248562]: 2026-01-31 08:27:30.494543008 +0000 UTC m=+0.114308761 container attach 283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:27:30 compute-0 awesome_colden[248578]: 167 167
Jan 31 08:27:30 compute-0 systemd[1]: libpod-283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353.scope: Deactivated successfully.
Jan 31 08:27:30 compute-0 podman[248562]: 2026-01-31 08:27:30.497134908 +0000 UTC m=+0.116900671 container died 283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:27:30 compute-0 podman[248562]: 2026-01-31 08:27:30.402973939 +0000 UTC m=+0.022739732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-31153c5599172ee4988b49d9db416c55dfc08d6bcb44773bce663ad7fb52abc9-merged.mount: Deactivated successfully.
Jan 31 08:27:30 compute-0 podman[248562]: 2026-01-31 08:27:30.540352347 +0000 UTC m=+0.160118080 container remove 283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:27:30 compute-0 systemd[1]: libpod-conmon-283dfdf149440a1a41c274cf64e0dfe2613c968f3a763a3114a967380e120353.scope: Deactivated successfully.
Jan 31 08:27:30 compute-0 podman[248604]: 2026-01-31 08:27:30.644134069 +0000 UTC m=+0.030419110 container create fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:27:30 compute-0 systemd[1]: Started libpod-conmon-fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7.scope.
Jan 31 08:27:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d6ef73268e816531ac0c6e424860998b0a59d7f051b61488b78c6c524717e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d6ef73268e816531ac0c6e424860998b0a59d7f051b61488b78c6c524717e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d6ef73268e816531ac0c6e424860998b0a59d7f051b61488b78c6c524717e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4d6ef73268e816531ac0c6e424860998b0a59d7f051b61488b78c6c524717e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:30 compute-0 podman[248604]: 2026-01-31 08:27:30.699062039 +0000 UTC m=+0.085346980 container init fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:27:30 compute-0 podman[248604]: 2026-01-31 08:27:30.703667854 +0000 UTC m=+0.089952805 container start fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:27:30 compute-0 podman[248604]: 2026-01-31 08:27:30.706694017 +0000 UTC m=+0.092978958 container attach fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:27:30 compute-0 podman[248604]: 2026-01-31 08:27:30.629790868 +0000 UTC m=+0.016075829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:31 compute-0 lvm[248697]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:27:31 compute-0 lvm[248697]: VG ceph_vg0 finished
Jan 31 08:27:31 compute-0 lvm[248700]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:27:31 compute-0 lvm[248700]: VG ceph_vg1 finished
Jan 31 08:27:31 compute-0 lvm[248702]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:27:31 compute-0 lvm[248702]: VG ceph_vg2 finished
Jan 31 08:27:31 compute-0 epic_keller[248621]: {}
Jan 31 08:27:31 compute-0 podman[248604]: 2026-01-31 08:27:31.456697653 +0000 UTC m=+0.842982614 container died fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:27:31 compute-0 systemd[1]: libpod-fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7.scope: Deactivated successfully.
Jan 31 08:27:31 compute-0 systemd[1]: libpod-fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7.scope: Consumed 1.087s CPU time.
Jan 31 08:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4d6ef73268e816531ac0c6e424860998b0a59d7f051b61488b78c6c524717e6-merged.mount: Deactivated successfully.
Jan 31 08:27:31 compute-0 podman[248604]: 2026-01-31 08:27:31.494172055 +0000 UTC m=+0.880456996 container remove fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:31 compute-0 ceph-mon[75294]: pgmap v1056: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:31 compute-0 systemd[1]: libpod-conmon-fb7bea179985da6445873718bffcb1a0c2f2a2e933a6c39ae379c001a579efe7.scope: Deactivated successfully.
Jan 31 08:27:31 compute-0 sudo[248526]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:27:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:27:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:31 compute-0 sudo[248716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:27:31 compute-0 sudo[248716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:31 compute-0 sudo[248716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:27:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:33 compute-0 podman[248741]: 2026-01-31 08:27:33.177211721 +0000 UTC m=+0.043230861 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:27:33 compute-0 ceph-mon[75294]: pgmap v1057: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:35 compute-0 ceph-mon[75294]: pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:37 compute-0 podman[248761]: 2026-01-31 08:27:37.189525034 +0000 UTC m=+0.062254190 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:27:37 compute-0 ceph-mon[75294]: pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:27:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960605483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:27:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:27:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960605483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:27:39 compute-0 ceph-mon[75294]: pgmap v1060: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1960605483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:27:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1960605483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:27:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:41 compute-0 ceph-mon[75294]: pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:43 compute-0 ceph-mon[75294]: pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:45 compute-0 ceph-mon[75294]: pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:27:46.968 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:27:46.968 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:27:46.968 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:47 compute-0 ceph-mon[75294]: pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:48 compute-0 ceph-mon[75294]: pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:27:50
Jan 31 08:27:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:27:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:27:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes']
Jan 31 08:27:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:27:51 compute-0 ceph-mon[75294]: pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:53 compute-0 ceph-mon[75294]: pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:27:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:59 compute-0 ceph-mds[96942]: mds.beacon.cephfs.compute-0.xdvglw missed beacon ack from the monitors
Jan 31 08:27:59 compute-0 nova_compute[240062]: 2026-01-31 08:27:59.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:59 compute-0 nova_compute[240062]: 2026-01-31 08:27:59.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:27:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:00 compute-0 nova_compute[240062]: 2026-01-31 08:28:00.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:00 compute-0 nova_compute[240062]: 2026-01-31 08:28:00.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:02 compute-0 nova_compute[240062]: 2026-01-31 08:28:02.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:02 compute-0 nova_compute[240062]: 2026-01-31 08:28:02.290 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:02 compute-0 nova_compute[240062]: 2026-01-31 08:28:02.290 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:02 compute-0 nova_compute[240062]: 2026-01-31 08:28:02.290 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:02 compute-0 nova_compute[240062]: 2026-01-31 08:28:02.291 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:28:02 compute-0 nova_compute[240062]: 2026-01-31 08:28:02.291 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:03 compute-0 ceph-mds[96942]: mds.beacon.cephfs.compute-0.xdvglw missed beacon ack from the monitors
Jan 31 08:28:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:03 compute-0 ceph-mon[75294]: pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:03 compute-0 ceph-mon[75294]: pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:03 compute-0 ceph-mon[75294]: pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:03 compute-0 ceph-mon[75294]: pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:03 compute-0 ceph-mon[75294]: pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:28:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277276471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:03 compute-0 nova_compute[240062]: 2026-01-31 08:28:03.585 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:03 compute-0 nova_compute[240062]: 2026-01-31 08:28:03.716 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:28:03 compute-0 nova_compute[240062]: 2026-01-31 08:28:03.717 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:28:03 compute-0 nova_compute[240062]: 2026-01-31 08:28:03.717 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:03 compute-0 nova_compute[240062]: 2026-01-31 08:28:03.717 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:04 compute-0 podman[248809]: 2026-01-31 08:28:04.164460708 +0000 UTC m=+0.039965231 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.201 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.202 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.217 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4277276471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:28:04 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465915641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.774 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.779 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.984 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.985 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:28:04 compute-0 nova_compute[240062]: 2026-01-31 08:28:04.986 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:05 compute-0 ceph-mon[75294]: pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:05 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1465915641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:05 compute-0 nova_compute[240062]: 2026-01-31 08:28:05.986 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:05 compute-0 nova_compute[240062]: 2026-01-31 08:28:05.986 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:06 compute-0 nova_compute[240062]: 2026-01-31 08:28:06.044 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:06 compute-0 nova_compute[240062]: 2026-01-31 08:28:06.045 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:28:06 compute-0 nova_compute[240062]: 2026-01-31 08:28:06.045 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:28:06 compute-0 nova_compute[240062]: 2026-01-31 08:28:06.193 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:28:06 compute-0 nova_compute[240062]: 2026-01-31 08:28:06.194 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:06 compute-0 nova_compute[240062]: 2026-01-31 08:28:06.194 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:06 compute-0 nova_compute[240062]: 2026-01-31 08:28:06.195 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.496157195641518e-07 of space, bias 1.0, pg target 0.00019488471586924554 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.070672423406264e-06 of space, bias 4.0, pg target 0.002484806908087517 quantized to 16 (current 16)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:28:07 compute-0 ceph-mon[75294]: pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:08 compute-0 podman[248851]: 2026-01-31 08:28:08.187627518 +0000 UTC m=+0.059600506 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:28:09 compute-0 ceph-mon[75294]: pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:11 compute-0 ceph-mon[75294]: pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:12 compute-0 ceph-mon[75294]: pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:15 compute-0 ceph-mon[75294]: pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:17 compute-0 ceph-mon[75294]: pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:19 compute-0 ceph-mon[75294]: pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:21 compute-0 ceph-mon[75294]: pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:28:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6227 writes, 25K keys, 6227 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6227 writes, 1162 syncs, 5.36 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 376 writes, 883 keys, 376 commit groups, 1.0 writes per commit group, ingest: 0.48 MB, 0.00 MB/s
                                           Interval WAL: 376 writes, 165 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:28:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:23 compute-0 ceph-mon[75294]: pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:25 compute-0 ceph-mon[75294]: pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:28:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 7550 writes, 30K keys, 7550 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7550 writes, 1591 syncs, 4.75 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 336 writes, 1029 keys, 336 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s
                                           Interval WAL: 336 writes, 132 syncs, 2.55 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:28:25 compute-0 sshd-session[248878]: Invalid user solv from 193.32.162.145 port 35880
Jan 31 08:28:25 compute-0 sshd-session[248878]: Connection closed by invalid user solv 193.32.162.145 port 35880 [preauth]
Jan 31 08:28:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:26 compute-0 ceph-mon[75294]: pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:29 compute-0 ceph-mon[75294]: pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:28:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 6273 writes, 25K keys, 6273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6273 writes, 1120 syncs, 5.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 485 writes, 1161 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.55 MB, 0.00 MB/s
                                           Interval WAL: 485 writes, 208 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:28:31 compute-0 ceph-mon[75294]: pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:31 compute-0 sudo[248880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:31 compute-0 sudo[248880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:31 compute-0 sudo[248880]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:31 compute-0 sudo[248905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:28:31 compute-0 sudo[248905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:31 compute-0 sudo[248905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:28:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:28:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:32 compute-0 sudo[248949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:32 compute-0 sudo[248949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:32 compute-0 sudo[248949]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:32 compute-0 sudo[248974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:28:32 compute-0 sudo[248974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:32 compute-0 sudo[248974]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:28:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:28:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:28:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:28:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:28:32 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:28:32 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:28:32 compute-0 sudo[249030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:32 compute-0 sudo[249030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:32 compute-0 sudo[249030]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:32 compute-0 sudo[249055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:28:32 compute-0 sudo[249055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:32 compute-0 podman[249090]: 2026-01-31 08:28:32.869609658 +0000 UTC m=+0.037354391 container create 269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wright, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:28:32 compute-0 systemd[1]: Started libpod-conmon-269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f.scope.
Jan 31 08:28:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:32 compute-0 podman[249090]: 2026-01-31 08:28:32.928417593 +0000 UTC m=+0.096162336 container init 269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wright, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:28:32 compute-0 podman[249090]: 2026-01-31 08:28:32.93418421 +0000 UTC m=+0.101928953 container start 269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:28:32 compute-0 podman[249090]: 2026-01-31 08:28:32.937956142 +0000 UTC m=+0.105700875 container attach 269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:28:32 compute-0 nice_wright[249108]: 167 167
Jan 31 08:28:32 compute-0 systemd[1]: libpod-269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f.scope: Deactivated successfully.
Jan 31 08:28:32 compute-0 podman[249090]: 2026-01-31 08:28:32.939039672 +0000 UTC m=+0.106784405 container died 269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:28:32 compute-0 podman[249090]: 2026-01-31 08:28:32.853319203 +0000 UTC m=+0.021063956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d60007fb12df9aff5e42f560df11f093ba351600d09056b9085d9a95d548d36-merged.mount: Deactivated successfully.
Jan 31 08:28:32 compute-0 podman[249090]: 2026-01-31 08:28:32.972225548 +0000 UTC m=+0.139970281 container remove 269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:28:32 compute-0 systemd[1]: libpod-conmon-269165e16d87dfc589b5959fb2b8b08e0ddc9b61d5ffcd7e9644482aa57ad70f.scope: Deactivated successfully.
Jan 31 08:28:32 compute-0 ceph-mon[75294]: pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:28:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:28:33 compute-0 podman[249131]: 2026-01-31 08:28:33.07789434 +0000 UTC m=+0.032111947 container create db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermat, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:28:33 compute-0 systemd[1]: Started libpod-conmon-db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957.scope.
Jan 31 08:28:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969fe794e274fb649e63c6228442adde6dc485d791a1f90eb7661675b76dd6e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969fe794e274fb649e63c6228442adde6dc485d791a1f90eb7661675b76dd6e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969fe794e274fb649e63c6228442adde6dc485d791a1f90eb7661675b76dd6e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969fe794e274fb649e63c6228442adde6dc485d791a1f90eb7661675b76dd6e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969fe794e274fb649e63c6228442adde6dc485d791a1f90eb7661675b76dd6e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:33 compute-0 podman[249131]: 2026-01-31 08:28:33.138218596 +0000 UTC m=+0.092436243 container init db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:28:33 compute-0 podman[249131]: 2026-01-31 08:28:33.143968504 +0000 UTC m=+0.098186121 container start db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:28:33 compute-0 podman[249131]: 2026-01-31 08:28:33.152897728 +0000 UTC m=+0.107115375 container attach db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:28:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:33 compute-0 podman[249131]: 2026-01-31 08:28:33.062349946 +0000 UTC m=+0.016567633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:28:33 compute-0 nifty_fermat[249148]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:28:33 compute-0 nifty_fermat[249148]: --> All data devices are unavailable
Jan 31 08:28:33 compute-0 systemd[1]: libpod-db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957.scope: Deactivated successfully.
Jan 31 08:28:33 compute-0 podman[249131]: 2026-01-31 08:28:33.509470248 +0000 UTC m=+0.463687865 container died db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-969fe794e274fb649e63c6228442adde6dc485d791a1f90eb7661675b76dd6e1-merged.mount: Deactivated successfully.
Jan 31 08:28:33 compute-0 podman[249131]: 2026-01-31 08:28:33.689605972 +0000 UTC m=+0.643823599 container remove db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:28:33 compute-0 systemd[1]: libpod-conmon-db219e282684e183cdd176e5610131b6a7de70e0eeceae1ac3aba3b5da249957.scope: Deactivated successfully.
Jan 31 08:28:33 compute-0 sudo[249055]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:33 compute-0 sudo[249179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:33 compute-0 sudo[249179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:33 compute-0 sudo[249179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:33 compute-0 sudo[249204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:28:33 compute-0 sudo[249204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:34 compute-0 podman[249239]: 2026-01-31 08:28:34.090159692 +0000 UTC m=+0.062157556 container create 90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wiles, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:28:34 compute-0 podman[249239]: 2026-01-31 08:28:34.050310675 +0000 UTC m=+0.022308559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:28:34 compute-0 systemd[1]: Started libpod-conmon-90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789.scope.
Jan 31 08:28:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:34 compute-0 podman[249239]: 2026-01-31 08:28:34.215924594 +0000 UTC m=+0.187922468 container init 90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wiles, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:28:34 compute-0 podman[249239]: 2026-01-31 08:28:34.221810175 +0000 UTC m=+0.193808069 container start 90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:28:34 compute-0 friendly_wiles[249255]: 167 167
Jan 31 08:28:34 compute-0 systemd[1]: libpod-90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789.scope: Deactivated successfully.
Jan 31 08:28:34 compute-0 podman[249239]: 2026-01-31 08:28:34.228546339 +0000 UTC m=+0.200544193 container attach 90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:28:34 compute-0 podman[249239]: 2026-01-31 08:28:34.228871307 +0000 UTC m=+0.200869191 container died 90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-57cce6e30a888f7c59644203c21b22a507ddfc3393c5b6f79e4af21083fb95df-merged.mount: Deactivated successfully.
Jan 31 08:28:34 compute-0 podman[249239]: 2026-01-31 08:28:34.264313595 +0000 UTC m=+0.236311449 container remove 90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:28:34 compute-0 systemd[1]: libpod-conmon-90c8f62806f574e8e658ba025d1cfd8d6d1b4f05065e62174104d2a5e4297789.scope: Deactivated successfully.
Jan 31 08:28:34 compute-0 podman[249256]: 2026-01-31 08:28:34.292930216 +0000 UTC m=+0.106067966 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:28:34 compute-0 podman[249297]: 2026-01-31 08:28:34.376555908 +0000 UTC m=+0.029773563 container create 4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_grothendieck, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:28:34 compute-0 systemd[1]: Started libpod-conmon-4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067.scope.
Jan 31 08:28:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e880e232efadd8ca20a478f8ba526d573135396914cb569e119ae3b0e1b5b61f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e880e232efadd8ca20a478f8ba526d573135396914cb569e119ae3b0e1b5b61f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e880e232efadd8ca20a478f8ba526d573135396914cb569e119ae3b0e1b5b61f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e880e232efadd8ca20a478f8ba526d573135396914cb569e119ae3b0e1b5b61f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:34 compute-0 podman[249297]: 2026-01-31 08:28:34.452765277 +0000 UTC m=+0.105982992 container init 4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_grothendieck, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:28:34 compute-0 podman[249297]: 2026-01-31 08:28:34.458185545 +0000 UTC m=+0.111403200 container start 4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_grothendieck, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:28:34 compute-0 podman[249297]: 2026-01-31 08:28:34.364238591 +0000 UTC m=+0.017456256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:28:34 compute-0 podman[249297]: 2026-01-31 08:28:34.462626446 +0000 UTC m=+0.115844101 container attach 4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_grothendieck, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]: {
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:     "0": [
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:         {
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "devices": [
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "/dev/loop3"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             ],
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_name": "ceph_lv0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_size": "21470642176",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "name": "ceph_lv0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "tags": {
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cluster_name": "ceph",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.crush_device_class": "",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.encrypted": "0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.objectstore": "bluestore",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osd_id": "0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.type": "block",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.vdo": "0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.with_tpm": "0"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             },
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "type": "block",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "vg_name": "ceph_vg0"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:         }
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:     ],
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:     "1": [
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:         {
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "devices": [
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "/dev/loop4"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             ],
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_name": "ceph_lv1",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_size": "21470642176",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "name": "ceph_lv1",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "tags": {
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cluster_name": "ceph",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.crush_device_class": "",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.encrypted": "0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.objectstore": "bluestore",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osd_id": "1",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.type": "block",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.vdo": "0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.with_tpm": "0"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             },
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "type": "block",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "vg_name": "ceph_vg1"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:         }
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:     ],
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:     "2": [
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:         {
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "devices": [
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "/dev/loop5"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             ],
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_name": "ceph_lv2",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_size": "21470642176",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "name": "ceph_lv2",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "tags": {
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.cluster_name": "ceph",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.crush_device_class": "",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.encrypted": "0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.objectstore": "bluestore",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osd_id": "2",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.type": "block",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.vdo": "0",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:                 "ceph.with_tpm": "0"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             },
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "type": "block",
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:             "vg_name": "ceph_vg2"
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:         }
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]:     ]
Jan 31 08:28:34 compute-0 youthful_grothendieck[249313]: }
Jan 31 08:28:34 compute-0 systemd[1]: libpod-4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067.scope: Deactivated successfully.
Jan 31 08:28:34 compute-0 podman[249297]: 2026-01-31 08:28:34.732841019 +0000 UTC m=+0.386058674 container died 4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_grothendieck, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e880e232efadd8ca20a478f8ba526d573135396914cb569e119ae3b0e1b5b61f-merged.mount: Deactivated successfully.
Jan 31 08:28:34 compute-0 podman[249297]: 2026-01-31 08:28:34.78451781 +0000 UTC m=+0.437735465 container remove 4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:28:34 compute-0 systemd[1]: libpod-conmon-4b1adb098faaa56336caeb5f07ca9eaf566b198b49985f03e6346fd7415b7067.scope: Deactivated successfully.
Jan 31 08:28:34 compute-0 sudo[249204]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:34 compute-0 sudo[249336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:34 compute-0 sudo[249336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:34 compute-0 sudo[249336]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:34 compute-0 sudo[249361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:28:34 compute-0 sudo[249361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:35 compute-0 ceph-mon[75294]: pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:35 compute-0 podman[249398]: 2026-01-31 08:28:35.232052332 +0000 UTC m=+0.100488173 container create b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:28:35 compute-0 podman[249398]: 2026-01-31 08:28:35.152608924 +0000 UTC m=+0.021044775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:28:35 compute-0 systemd[1]: Started libpod-conmon-b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a.scope.
Jan 31 08:28:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:35 compute-0 podman[249398]: 2026-01-31 08:28:35.324278679 +0000 UTC m=+0.192714510 container init b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:28:35 compute-0 podman[249398]: 2026-01-31 08:28:35.330237311 +0000 UTC m=+0.198673152 container start b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:28:35 compute-0 serene_cartwright[249414]: 167 167
Jan 31 08:28:35 compute-0 systemd[1]: libpod-b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a.scope: Deactivated successfully.
Jan 31 08:28:35 compute-0 conmon[249414]: conmon b8532734b1ecc2025ca6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a.scope/container/memory.events
Jan 31 08:28:35 compute-0 podman[249398]: 2026-01-31 08:28:35.337606352 +0000 UTC m=+0.206042183 container attach b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:28:35 compute-0 podman[249398]: 2026-01-31 08:28:35.337973812 +0000 UTC m=+0.206409643 container died b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b328c5cbf1efcfc22c813befc11bfe357cc9b87a8f04a5b113a99884c4721e6e-merged.mount: Deactivated successfully.
Jan 31 08:28:35 compute-0 podman[249398]: 2026-01-31 08:28:35.4566678 +0000 UTC m=+0.325103631 container remove b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:28:35 compute-0 systemd[1]: libpod-conmon-b8532734b1ecc2025ca68b3c27367e4a230cdd84b645d89a6fe8f96f9fa3088a.scope: Deactivated successfully.
Jan 31 08:28:35 compute-0 podman[249438]: 2026-01-31 08:28:35.577874328 +0000 UTC m=+0.045367599 container create 1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:28:35 compute-0 podman[249438]: 2026-01-31 08:28:35.552471005 +0000 UTC m=+0.019964296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:28:35 compute-0 systemd[1]: Started libpod-conmon-1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9.scope.
Jan 31 08:28:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50adb14a9331e6c0bc4689e3f37949f20450c5447b52f5ff316ee5185de1ab85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50adb14a9331e6c0bc4689e3f37949f20450c5447b52f5ff316ee5185de1ab85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50adb14a9331e6c0bc4689e3f37949f20450c5447b52f5ff316ee5185de1ab85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50adb14a9331e6c0bc4689e3f37949f20450c5447b52f5ff316ee5185de1ab85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:35 compute-0 podman[249438]: 2026-01-31 08:28:35.838174331 +0000 UTC m=+0.305667602 container init 1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:28:35 compute-0 podman[249438]: 2026-01-31 08:28:35.843033653 +0000 UTC m=+0.310526914 container start 1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_clarke, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:28:35 compute-0 podman[249438]: 2026-01-31 08:28:35.855795421 +0000 UTC m=+0.323288702 container attach 1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:28:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:36 compute-0 lvm[249534]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:28:36 compute-0 lvm[249534]: VG ceph_vg0 finished
Jan 31 08:28:36 compute-0 lvm[249537]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:28:36 compute-0 lvm[249537]: VG ceph_vg1 finished
Jan 31 08:28:36 compute-0 lvm[249539]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:28:36 compute-0 lvm[249539]: VG ceph_vg2 finished
Jan 31 08:28:36 compute-0 funny_clarke[249458]: {}
Jan 31 08:28:36 compute-0 systemd[1]: libpod-1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9.scope: Deactivated successfully.
Jan 31 08:28:36 compute-0 systemd[1]: libpod-1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9.scope: Consumed 1.018s CPU time.
Jan 31 08:28:36 compute-0 podman[249542]: 2026-01-31 08:28:36.58100591 +0000 UTC m=+0.020622563 container died 1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-50adb14a9331e6c0bc4689e3f37949f20450c5447b52f5ff316ee5185de1ab85-merged.mount: Deactivated successfully.
Jan 31 08:28:36 compute-0 podman[249542]: 2026-01-31 08:28:36.652337807 +0000 UTC m=+0.091954440 container remove 1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_clarke, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:28:36 compute-0 systemd[1]: libpod-conmon-1159324f493295294ca2bca5d1de7028829900ed0eb7ec3eb4bedaa001b3bdc9.scope: Deactivated successfully.
Jan 31 08:28:36 compute-0 sudo[249361]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:28:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:28:36 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:36 compute-0 sudo[249557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:28:36 compute-0 sudo[249557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:36 compute-0 sudo[249557]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:37 compute-0 ceph-mon[75294]: pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:37 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:28:37 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Check health
Jan 31 08:28:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:38 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:38 compute-0 ceph-mon[75294]: pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:39 compute-0 podman[249582]: 2026-01-31 08:28:39.21263544 +0000 UTC m=+0.081256078 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:28:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:28:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1808688144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:28:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:28:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1808688144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:28:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1808688144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:28:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1808688144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:28:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:40 compute-0 ceph-mon[75294]: pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:42 compute-0 sshd-session[249608]: Invalid user sol from 80.94.92.182 port 57612
Jan 31 08:28:43 compute-0 sshd-session[249608]: Connection closed by invalid user sol 80.94.92.182 port 57612 [preauth]
Jan 31 08:28:43 compute-0 ceph-mon[75294]: pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:43 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:45 compute-0 ceph-mon[75294]: pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:45 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:46 compute-0 ceph-mon[75294]: pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:28:46.969 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:28:46.970 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:28:46.970 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:47 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:49 compute-0 ceph-mon[75294]: pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:49 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:28:50
Jan 31 08:28:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:28:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:28:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.control']
Jan 31 08:28:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:28:51 compute-0 ceph-mon[75294]: pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:51 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:53 compute-0 ceph-mon[75294]: pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:53 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:54 compute-0 ceph-mon[75294]: pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:28:55 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 31 08:28:57 compute-0 ceph-mon[75294]: pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 31 08:28:57 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 31 08:28:57 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 820 KiB/s wr, 6 op/s
Jan 31 08:28:58 compute-0 ceph-mon[75294]: osdmap e147: 3 total, 3 up, 3 in
Jan 31 08:28:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 31 08:28:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 31 08:28:59 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 31 08:28:59 compute-0 ceph-mon[75294]: pgmap v1101: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 820 KiB/s wr, 6 op/s
Jan 31 08:28:59 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 29 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 MiB/s wr, 36 op/s
Jan 31 08:29:00 compute-0 nova_compute[240062]: 2026-01-31 08:29:00.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:00 compute-0 ceph-mon[75294]: osdmap e148: 3 total, 3 up, 3 in
Jan 31 08:29:01 compute-0 nova_compute[240062]: 2026-01-31 08:29:01.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:01 compute-0 nova_compute[240062]: 2026-01-31 08:29:01.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:01 compute-0 nova_compute[240062]: 2026-01-31 08:29:01.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:29:01 compute-0 ceph-mon[75294]: pgmap v1103: 305 pgs: 305 active+clean; 29 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 MiB/s wr, 36 op/s
Jan 31 08:29:01 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 29 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 MiB/s wr, 36 op/s
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.307 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.308 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.308 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.308 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.309 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:29:02 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451685623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.870 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:02 compute-0 ceph-mon[75294]: pgmap v1104: 305 pgs: 305 active+clean; 29 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 MiB/s wr, 36 op/s
Jan 31 08:29:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3451685623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.996 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.997 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.998 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:02 compute-0 nova_compute[240062]: 2026-01-31 08:29:02.998 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:03 compute-0 nova_compute[240062]: 2026-01-31 08:29:03.920 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:29:03 compute-0 nova_compute[240062]: 2026-01-31 08:29:03.921 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:29:03 compute-0 nova_compute[240062]: 2026-01-31 08:29:03.937 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:03 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 08:29:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:29:04 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240254852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:04 compute-0 nova_compute[240062]: 2026-01-31 08:29:04.509 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:04 compute-0 nova_compute[240062]: 2026-01-31 08:29:04.514 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:29:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2240254852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:04 compute-0 nova_compute[240062]: 2026-01-31 08:29:04.593 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:29:04 compute-0 nova_compute[240062]: 2026-01-31 08:29:04.595 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:29:04 compute-0 nova_compute[240062]: 2026-01-31 08:29:04.596 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:05 compute-0 podman[249654]: 2026-01-31 08:29:05.186265524 +0000 UTC m=+0.046697186 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:29:05 compute-0 nova_compute[240062]: 2026-01-31 08:29:05.596 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:05 compute-0 nova_compute[240062]: 2026-01-31 08:29:05.597 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:05 compute-0 nova_compute[240062]: 2026-01-31 08:29:05.597 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:29:05 compute-0 nova_compute[240062]: 2026-01-31 08:29:05.597 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:29:05 compute-0 ceph-mon[75294]: pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 08:29:05 compute-0 nova_compute[240062]: 2026-01-31 08:29:05.779 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:29:05 compute-0 nova_compute[240062]: 2026-01-31 08:29:05.779 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:05 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664437211226647 of space, bias 1.0, pg target 0.19993311633679942 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.073404836632928e-06 of space, bias 4.0, pg target 0.0024880858039595137 quantized to 16 (current 16)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:29:07 compute-0 ceph-mon[75294]: pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Jan 31 08:29:07 compute-0 nova_compute[240062]: 2026-01-31 08:29:07.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:07 compute-0 nova_compute[240062]: 2026-01-31 08:29:07.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:07 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.3 MiB/s wr, 31 op/s
Jan 31 08:29:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:09 compute-0 ceph-mon[75294]: pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.3 MiB/s wr, 31 op/s
Jan 31 08:29:09 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.2 MiB/s wr, 8 op/s
Jan 31 08:29:10 compute-0 podman[249676]: 2026-01-31 08:29:10.183375351 +0000 UTC m=+0.061022877 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:29:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 31 08:29:11 compute-0 ceph-mon[75294]: pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.2 MiB/s wr, 8 op/s
Jan 31 08:29:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 31 08:29:11 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 31 08:29:11 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.2 MiB/s wr, 8 op/s
Jan 31 08:29:13 compute-0 ceph-mon[75294]: osdmap e149: 3 total, 3 up, 3 in
Jan 31 08:29:13 compute-0 ceph-mon[75294]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.2 MiB/s wr, 8 op/s
Jan 31 08:29:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:13 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:29:15 compute-0 ceph-mon[75294]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:29:15 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:29:17 compute-0 ceph-mon[75294]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:29:17 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 33 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Jan 31 08:29:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 31 08:29:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 31 08:29:18 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 31 08:29:19 compute-0 ceph-mon[75294]: pgmap v1113: 305 pgs: 305 active+clean; 33 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Jan 31 08:29:19 compute-0 ceph-mon[75294]: osdmap e150: 3 total, 3 up, 3 in
Jan 31 08:29:19 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 40 op/s
Jan 31 08:29:20 compute-0 ceph-mon[75294]: pgmap v1115: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 40 op/s
Jan 31 08:29:21 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Jan 31 08:29:23 compute-0 ceph-mon[75294]: pgmap v1116: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Jan 31 08:29:23 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:23 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 31 08:29:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:25 compute-0 ceph-mon[75294]: pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 31 08:29:25 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 31 08:29:26 compute-0 ceph-mon[75294]: pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 31 08:29:27 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Jan 31 08:29:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 31 08:29:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 31 08:29:29 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 31 08:29:29 compute-0 ceph-mon[75294]: pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Jan 31 08:29:29 compute-0 ceph-mon[75294]: osdmap e151: 3 total, 3 up, 3 in
Jan 31 08:29:29 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 921 B/s wr, 17 op/s
Jan 31 08:29:31 compute-0 ceph-mon[75294]: pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 921 B/s wr, 17 op/s
Jan 31 08:29:31 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 921 B/s wr, 17 op/s
Jan 31 08:29:32 compute-0 ceph-mon[75294]: pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 921 B/s wr, 17 op/s
Jan 31 08:29:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:33 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:35 compute-0 ceph-mon[75294]: pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:35 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:36 compute-0 podman[249703]: 2026-01-31 08:29:36.174370972 +0000 UTC m=+0.042635413 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:29:36 compute-0 sudo[249722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:36 compute-0 sudo[249722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:36 compute-0 sudo[249722]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:36 compute-0 sudo[249747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:29:36 compute-0 sudo[249747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:37 compute-0 sudo[249747]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:37 compute-0 sudo[249802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:37 compute-0 sudo[249802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:37 compute-0 sudo[249802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:37 compute-0 sudo[249827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:29:37 compute-0 sudo[249827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:37 compute-0 ceph-mon[75294]: pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:37 compute-0 podman[249864]: 2026-01-31 08:29:37.668394445 +0000 UTC m=+0.020626033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:37 compute-0 podman[249864]: 2026-01-31 08:29:37.806483985 +0000 UTC m=+0.158715543 container create 86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:29:37 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:38 compute-0 systemd[1]: Started libpod-conmon-86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc.scope.
Jan 31 08:29:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:38 compute-0 podman[249864]: 2026-01-31 08:29:38.20991603 +0000 UTC m=+0.562147598 container init 86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:38 compute-0 podman[249864]: 2026-01-31 08:29:38.215626206 +0000 UTC m=+0.567857794 container start 86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:29:38 compute-0 bold_shirley[249881]: 167 167
Jan 31 08:29:38 compute-0 systemd[1]: libpod-86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc.scope: Deactivated successfully.
Jan 31 08:29:39 compute-0 podman[249864]: 2026-01-31 08:29:39.020307238 +0000 UTC m=+1.372538816 container attach 86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:29:39 compute-0 podman[249864]: 2026-01-31 08:29:39.021177792 +0000 UTC m=+1.373409350 container died 86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:29:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612724056' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:29:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:29:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612724056' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:29:39 compute-0 ceph-mon[75294]: pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9cdb87253c8a4ca04606e8b6af9fe0c426d33821035c1db584a455ad57a2043-merged.mount: Deactivated successfully.
Jan 31 08:29:39 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:40 compute-0 podman[249864]: 2026-01-31 08:29:40.155903731 +0000 UTC m=+2.508135319 container remove 86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:29:40 compute-0 systemd[1]: libpod-conmon-86dad7b90ba6f4732ddeef3d03816022ad49990cb1fdc79c3e726cec131641dc.scope: Deactivated successfully.
Jan 31 08:29:40 compute-0 podman[249923]: 2026-01-31 08:29:40.338449772 +0000 UTC m=+0.078122248 container create 7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:40 compute-0 podman[249923]: 2026-01-31 08:29:40.283361131 +0000 UTC m=+0.023033627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:40 compute-0 podman[249898]: 2026-01-31 08:29:40.414958675 +0000 UTC m=+0.201769125 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 08:29:40 compute-0 systemd[1]: Started libpod-conmon-7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83.scope.
Jan 31 08:29:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be61fe953b04561ec07e448501abb57a9187db58a3e261daab75da59322e0f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be61fe953b04561ec07e448501abb57a9187db58a3e261daab75da59322e0f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be61fe953b04561ec07e448501abb57a9187db58a3e261daab75da59322e0f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be61fe953b04561ec07e448501abb57a9187db58a3e261daab75da59322e0f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:40 compute-0 podman[249923]: 2026-01-31 08:29:40.532134676 +0000 UTC m=+0.271807182 container init 7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:29:40 compute-0 podman[249923]: 2026-01-31 08:29:40.538079868 +0000 UTC m=+0.277752344 container start 7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_davinci, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:29:40 compute-0 podman[249923]: 2026-01-31 08:29:40.591228355 +0000 UTC m=+0.330900831 container attach 7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_davinci, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:29:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2612724056' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:29:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2612724056' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:29:41 compute-0 nifty_davinci[249947]: [
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:     {
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "available": false,
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "being_replaced": false,
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "ceph_device_lvm": false,
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "lsm_data": {},
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "lvs": [],
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "path": "/dev/sr0",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "rejected_reasons": [
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "Insufficient space (<5GB)",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "Has a FileSystem"
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         ],
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         "sys_api": {
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "actuators": null,
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "device_nodes": [
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:                 "sr0"
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             ],
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "devname": "sr0",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "human_readable_size": "482.00 KB",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "id_bus": "ata",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "model": "QEMU DVD-ROM",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "nr_requests": "2",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "parent": "/dev/sr0",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "partitions": {},
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "path": "/dev/sr0",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "removable": "1",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "rev": "2.5+",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "ro": "0",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "rotational": "1",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "sas_address": "",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "sas_device_handle": "",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "scheduler_mode": "mq-deadline",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "sectors": 0,
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "sectorsize": "2048",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "size": 493568.0,
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "support_discard": "2048",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "type": "disk",
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:             "vendor": "QEMU"
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:         }
Jan 31 08:29:41 compute-0 nifty_davinci[249947]:     }
Jan 31 08:29:41 compute-0 nifty_davinci[249947]: ]
Jan 31 08:29:41 compute-0 systemd[1]: libpod-7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83.scope: Deactivated successfully.
Jan 31 08:29:41 compute-0 podman[249923]: 2026-01-31 08:29:41.054965582 +0000 UTC m=+0.794638088 container died 7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4be61fe953b04561ec07e448501abb57a9187db58a3e261daab75da59322e0f9-merged.mount: Deactivated successfully.
Jan 31 08:29:41 compute-0 podman[249923]: 2026-01-31 08:29:41.751709346 +0000 UTC m=+1.491381822 container remove 7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_davinci, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:29:41 compute-0 ceph-mon[75294]: pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:41 compute-0 sudo[249827]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:41 compute-0 systemd[1]: libpod-conmon-7f7f17f94f4e3ae8bc49f7c24a6d70862482de549e1ecfcdc4887aa11505ed83.scope: Deactivated successfully.
Jan 31 08:29:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:29:41 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:29:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:29:42 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:42 compute-0 sudo[250756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:42 compute-0 sudo[250756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:42 compute-0 sudo[250756]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:42 compute-0 sudo[250781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:29:42 compute-0 sudo[250781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:42 compute-0 podman[250817]: 2026-01-31 08:29:42.592746459 +0000 UTC m=+0.068271291 container create 10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:29:42 compute-0 podman[250817]: 2026-01-31 08:29:42.543433825 +0000 UTC m=+0.018958697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:42 compute-0 systemd[1]: Started libpod-conmon-10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc.scope.
Jan 31 08:29:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:42 compute-0 podman[250817]: 2026-01-31 08:29:42.777028596 +0000 UTC m=+0.252553458 container init 10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mirzakhani, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:42 compute-0 podman[250817]: 2026-01-31 08:29:42.782838054 +0000 UTC m=+0.258362886 container start 10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:29:42 compute-0 nice_mirzakhani[250833]: 167 167
Jan 31 08:29:42 compute-0 systemd[1]: libpod-10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc.scope: Deactivated successfully.
Jan 31 08:29:42 compute-0 podman[250817]: 2026-01-31 08:29:42.945734651 +0000 UTC m=+0.421259483 container attach 10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:29:42 compute-0 podman[250817]: 2026-01-31 08:29:42.949178514 +0000 UTC m=+0.424703346 container died 10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mirzakhani, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:29:43 compute-0 ceph-mon[75294]: pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:29:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-acd40734967a9770f37899f6bbaa635db0c8c017b94d5df65353ffcc743ca568-merged.mount: Deactivated successfully.
Jan 31 08:29:43 compute-0 podman[250817]: 2026-01-31 08:29:43.424369603 +0000 UTC m=+0.899894435 container remove 10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:29:43 compute-0 systemd[1]: libpod-conmon-10581de6a091fea95a20557a22a7d0b93f06b572837ef6c5b058c096995b39dc.scope: Deactivated successfully.
Jan 31 08:29:43 compute-0 podman[250855]: 2026-01-31 08:29:43.606169583 +0000 UTC m=+0.099956712 container create 773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:29:43 compute-0 podman[250855]: 2026-01-31 08:29:43.523203724 +0000 UTC m=+0.016990873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:43 compute-0 systemd[1]: Started libpod-conmon-773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19.scope.
Jan 31 08:29:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df56a1f61f3352bd8c5564a9a896a5a4af11161ca318bf84fd6cb24dad3faebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df56a1f61f3352bd8c5564a9a896a5a4af11161ca318bf84fd6cb24dad3faebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df56a1f61f3352bd8c5564a9a896a5a4af11161ca318bf84fd6cb24dad3faebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df56a1f61f3352bd8c5564a9a896a5a4af11161ca318bf84fd6cb24dad3faebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df56a1f61f3352bd8c5564a9a896a5a4af11161ca318bf84fd6cb24dad3faebc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:43 compute-0 podman[250855]: 2026-01-31 08:29:43.948814474 +0000 UTC m=+0.442601623 container init 773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_engelbart, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:29:43 compute-0 podman[250855]: 2026-01-31 08:29:43.953946254 +0000 UTC m=+0.447733393 container start 773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:29:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:44 compute-0 podman[250855]: 2026-01-31 08:29:44.009595599 +0000 UTC m=+0.503382728 container attach 773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_engelbart, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:29:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:44 compute-0 gracious_engelbart[250872]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:29:44 compute-0 gracious_engelbart[250872]: --> All data devices are unavailable
Jan 31 08:29:44 compute-0 systemd[1]: libpod-773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19.scope: Deactivated successfully.
Jan 31 08:29:44 compute-0 podman[250855]: 2026-01-31 08:29:44.350111501 +0000 UTC m=+0.843898640 container died 773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-df56a1f61f3352bd8c5564a9a896a5a4af11161ca318bf84fd6cb24dad3faebc-merged.mount: Deactivated successfully.
Jan 31 08:29:45 compute-0 podman[250855]: 2026-01-31 08:29:45.005278113 +0000 UTC m=+1.499065242 container remove 773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:29:45 compute-0 systemd[1]: libpod-conmon-773effad0a877c1dd340f1bc65fec38fc41f776aa4568b2dd7474150e98dfa19.scope: Deactivated successfully.
Jan 31 08:29:45 compute-0 sudo[250781]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:45 compute-0 sudo[250905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:45 compute-0 sudo[250905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:45 compute-0 sudo[250905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:45 compute-0 sudo[250930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:29:45 compute-0 sudo[250930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:45 compute-0 ceph-mon[75294]: pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:45 compute-0 podman[250966]: 2026-01-31 08:29:45.433377449 +0000 UTC m=+0.095215783 container create 9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_cerf, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:29:45 compute-0 podman[250966]: 2026-01-31 08:29:45.360389162 +0000 UTC m=+0.022227526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:45 compute-0 systemd[1]: Started libpod-conmon-9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e.scope.
Jan 31 08:29:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:45 compute-0 podman[250966]: 2026-01-31 08:29:45.588148534 +0000 UTC m=+0.249986878 container init 9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:29:45 compute-0 podman[250966]: 2026-01-31 08:29:45.592625275 +0000 UTC m=+0.254463609 container start 9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:45 compute-0 distracted_cerf[250982]: 167 167
Jan 31 08:29:45 compute-0 systemd[1]: libpod-9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e.scope: Deactivated successfully.
Jan 31 08:29:45 compute-0 podman[250966]: 2026-01-31 08:29:45.635913254 +0000 UTC m=+0.297751598 container attach 9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:29:45 compute-0 podman[250966]: 2026-01-31 08:29:45.636447409 +0000 UTC m=+0.298285733 container died 9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_cerf, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:29:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ccd3fa6c9debf71f421ce979630f42e53edc2be9592fb2785cbb57aef338ac8-merged.mount: Deactivated successfully.
Jan 31 08:29:45 compute-0 podman[250966]: 2026-01-31 08:29:45.959592648 +0000 UTC m=+0.621430982 container remove 9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:29:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:46 compute-0 systemd[1]: libpod-conmon-9b48af6581a240281c9a8681e7d1eb63bcdedfc5e25a07e7fd55b287ca96072e.scope: Deactivated successfully.
Jan 31 08:29:46 compute-0 podman[251007]: 2026-01-31 08:29:46.072057661 +0000 UTC m=+0.026877813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:46 compute-0 podman[251007]: 2026-01-31 08:29:46.259454675 +0000 UTC m=+0.214274807 container create 5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:29:46 compute-0 systemd[1]: Started libpod-conmon-5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4.scope.
Jan 31 08:29:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3100b785bf927600d7994f0ebf665884507c8fc4d7fe68096126ec4974b20649/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3100b785bf927600d7994f0ebf665884507c8fc4d7fe68096126ec4974b20649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3100b785bf927600d7994f0ebf665884507c8fc4d7fe68096126ec4974b20649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3100b785bf927600d7994f0ebf665884507c8fc4d7fe68096126ec4974b20649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:46 compute-0 podman[251007]: 2026-01-31 08:29:46.646150264 +0000 UTC m=+0.600970416 container init 5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:46 compute-0 podman[251007]: 2026-01-31 08:29:46.652880937 +0000 UTC m=+0.607701069 container start 5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:46 compute-0 podman[251007]: 2026-01-31 08:29:46.8123578 +0000 UTC m=+0.767177962 container attach 5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:29:46 compute-0 happy_spence[251024]: {
Jan 31 08:29:46 compute-0 happy_spence[251024]:     "0": [
Jan 31 08:29:46 compute-0 happy_spence[251024]:         {
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "devices": [
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "/dev/loop3"
Jan 31 08:29:46 compute-0 happy_spence[251024]:             ],
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_name": "ceph_lv0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_size": "21470642176",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "name": "ceph_lv0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "tags": {
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cluster_name": "ceph",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.crush_device_class": "",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.encrypted": "0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.objectstore": "bluestore",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osd_id": "0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.type": "block",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.vdo": "0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.with_tpm": "0"
Jan 31 08:29:46 compute-0 happy_spence[251024]:             },
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "type": "block",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "vg_name": "ceph_vg0"
Jan 31 08:29:46 compute-0 happy_spence[251024]:         }
Jan 31 08:29:46 compute-0 happy_spence[251024]:     ],
Jan 31 08:29:46 compute-0 happy_spence[251024]:     "1": [
Jan 31 08:29:46 compute-0 happy_spence[251024]:         {
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "devices": [
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "/dev/loop4"
Jan 31 08:29:46 compute-0 happy_spence[251024]:             ],
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_name": "ceph_lv1",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_size": "21470642176",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "name": "ceph_lv1",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "tags": {
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cluster_name": "ceph",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.crush_device_class": "",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.encrypted": "0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.objectstore": "bluestore",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osd_id": "1",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.type": "block",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.vdo": "0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.with_tpm": "0"
Jan 31 08:29:46 compute-0 happy_spence[251024]:             },
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "type": "block",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "vg_name": "ceph_vg1"
Jan 31 08:29:46 compute-0 happy_spence[251024]:         }
Jan 31 08:29:46 compute-0 happy_spence[251024]:     ],
Jan 31 08:29:46 compute-0 happy_spence[251024]:     "2": [
Jan 31 08:29:46 compute-0 happy_spence[251024]:         {
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "devices": [
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "/dev/loop5"
Jan 31 08:29:46 compute-0 happy_spence[251024]:             ],
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_name": "ceph_lv2",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_size": "21470642176",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "name": "ceph_lv2",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "tags": {
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.cluster_name": "ceph",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.crush_device_class": "",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.encrypted": "0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.objectstore": "bluestore",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osd_id": "2",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.type": "block",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.vdo": "0",
Jan 31 08:29:46 compute-0 happy_spence[251024]:                 "ceph.with_tpm": "0"
Jan 31 08:29:46 compute-0 happy_spence[251024]:             },
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "type": "block",
Jan 31 08:29:46 compute-0 happy_spence[251024]:             "vg_name": "ceph_vg2"
Jan 31 08:29:46 compute-0 happy_spence[251024]:         }
Jan 31 08:29:46 compute-0 happy_spence[251024]:     ]
Jan 31 08:29:46 compute-0 happy_spence[251024]: }
Jan 31 08:29:46 compute-0 podman[251007]: 2026-01-31 08:29:46.919543149 +0000 UTC m=+0.874363271 container died 5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:29:46 compute-0 systemd[1]: libpod-5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4.scope: Deactivated successfully.
Jan 31 08:29:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:29:46.971 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:29:46.972 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:29:46.973 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3100b785bf927600d7994f0ebf665884507c8fc4d7fe68096126ec4974b20649-merged.mount: Deactivated successfully.
Jan 31 08:29:47 compute-0 podman[251007]: 2026-01-31 08:29:47.327344143 +0000 UTC m=+1.282164275 container remove 5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:29:47 compute-0 systemd[1]: libpod-conmon-5fb52dd692d8e20be534cc9bd0bb28f52e85d3cd281ca045180ada302c140fe4.scope: Deactivated successfully.
Jan 31 08:29:47 compute-0 sudo[250930]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:47 compute-0 sudo[251047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:47 compute-0 sudo[251047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:47 compute-0 sudo[251047]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:47 compute-0 sudo[251072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:29:47 compute-0 sudo[251072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:47 compute-0 ceph-mon[75294]: pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:47 compute-0 podman[251109]: 2026-01-31 08:29:47.755494472 +0000 UTC m=+0.092224342 container create ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:29:47 compute-0 podman[251109]: 2026-01-31 08:29:47.681643831 +0000 UTC m=+0.018373701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:47 compute-0 systemd[1]: Started libpod-conmon-ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a.scope.
Jan 31 08:29:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:47 compute-0 podman[251109]: 2026-01-31 08:29:47.975422461 +0000 UTC m=+0.312152341 container init ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:29:47 compute-0 podman[251109]: 2026-01-31 08:29:47.980236832 +0000 UTC m=+0.316966692 container start ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_brown, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:29:47 compute-0 stupefied_brown[251125]: 167 167
Jan 31 08:29:47 compute-0 systemd[1]: libpod-ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a.scope: Deactivated successfully.
Jan 31 08:29:47 compute-0 conmon[251125]: conmon ea63d0bd18862fde8ec5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a.scope/container/memory.events
Jan 31 08:29:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:48 compute-0 podman[251109]: 2026-01-31 08:29:48.029110912 +0000 UTC m=+0.365840802 container attach ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_brown, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:29:48 compute-0 podman[251109]: 2026-01-31 08:29:48.029540715 +0000 UTC m=+0.366270575 container died ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-65119d58b3dab0f29400176fb2f1b03f32d96b9148c511d8a7817e1e965af6fe-merged.mount: Deactivated successfully.
Jan 31 08:29:48 compute-0 podman[251109]: 2026-01-31 08:29:48.33636131 +0000 UTC m=+0.673091170 container remove ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:29:48 compute-0 systemd[1]: libpod-conmon-ea63d0bd18862fde8ec5965970d067f190836ee76e3b79c12ad5f8a10ae15f1a.scope: Deactivated successfully.
Jan 31 08:29:48 compute-0 podman[251148]: 2026-01-31 08:29:48.441337498 +0000 UTC m=+0.022742340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:48 compute-0 podman[251148]: 2026-01-31 08:29:48.537230359 +0000 UTC m=+0.118635171 container create 2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:29:48 compute-0 systemd[1]: Started libpod-conmon-2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3.scope.
Jan 31 08:29:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c9ce17757101109cada57fa2c70721d3fd21ab3193b7c4069cee19a04c346b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c9ce17757101109cada57fa2c70721d3fd21ab3193b7c4069cee19a04c346b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c9ce17757101109cada57fa2c70721d3fd21ab3193b7c4069cee19a04c346b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c9ce17757101109cada57fa2c70721d3fd21ab3193b7c4069cee19a04c346b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:48 compute-0 podman[251148]: 2026-01-31 08:29:48.789857019 +0000 UTC m=+0.371261881 container init 2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:29:48 compute-0 podman[251148]: 2026-01-31 08:29:48.795596605 +0000 UTC m=+0.377001437 container start 2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:29:48 compute-0 podman[251148]: 2026-01-31 08:29:48.861486409 +0000 UTC m=+0.442891241 container attach 2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:29:48 compute-0 ceph-mon[75294]: pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:49 compute-0 lvm[251244]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:29:49 compute-0 lvm[251243]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:29:49 compute-0 lvm[251243]: VG ceph_vg0 finished
Jan 31 08:29:49 compute-0 lvm[251244]: VG ceph_vg1 finished
Jan 31 08:29:49 compute-0 lvm[251246]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:29:49 compute-0 lvm[251246]: VG ceph_vg2 finished
Jan 31 08:29:49 compute-0 laughing_galileo[251165]: {}
Jan 31 08:29:49 compute-0 systemd[1]: libpod-2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3.scope: Deactivated successfully.
Jan 31 08:29:49 compute-0 podman[251148]: 2026-01-31 08:29:49.513095962 +0000 UTC m=+1.094500774 container died 2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:29:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c9ce17757101109cada57fa2c70721d3fd21ab3193b7c4069cee19a04c346b-merged.mount: Deactivated successfully.
Jan 31 08:29:49 compute-0 podman[251148]: 2026-01-31 08:29:49.704098834 +0000 UTC m=+1.285503646 container remove 2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:29:49 compute-0 systemd[1]: libpod-conmon-2f97061d217ae4f1290f6e4b301b5f548adb10493b05cee01eb19f92f31315a3.scope: Deactivated successfully.
Jan 31 08:29:49 compute-0 sudo[251072]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:29:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:29:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:49 compute-0 sudo[251261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:29:49 compute-0 sudo[251261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:49 compute-0 sudo[251261]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:50 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:29:50 compute-0 ceph-mon[75294]: pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:29:50
Jan 31 08:29:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:29:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:29:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['images', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Jan 31 08:29:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:29:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:53 compute-0 ceph-mon[75294]: pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:55 compute-0 ceph-mon[75294]: pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:29:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:29:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:57 compute-0 ceph-mon[75294]: pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:59 compute-0 ceph-mon[75294]: pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:01 compute-0 nova_compute[240062]: 2026-01-31 08:30:01.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:01 compute-0 ceph-mon[75294]: pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.326 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.327 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.327 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.327 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.328 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:03 compute-0 ceph-mon[75294]: pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:30:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2404586676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.852 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.970 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.972 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.972 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:03 compute-0 nova_compute[240062]: 2026-01-31 08:30:03.972 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.165 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.166 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.185 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:30:04 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207376820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.755 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.760 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.779 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.781 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:30:04 compute-0 nova_compute[240062]: 2026-01-31 08:30:04.782 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2404586676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:06 compute-0 ceph-mon[75294]: pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2207376820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:30:06 compute-0 nova_compute[240062]: 2026-01-31 08:30:06.777 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:06 compute-0 nova_compute[240062]: 2026-01-31 08:30:06.778 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:06 compute-0 nova_compute[240062]: 2026-01-31 08:30:06.778 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:30:06 compute-0 nova_compute[240062]: 2026-01-31 08:30:06.778 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:30:07 compute-0 podman[251330]: 2026-01-31 08:30:07.170275354 +0000 UTC m=+0.043462884 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:30:07 compute-0 nova_compute[240062]: 2026-01-31 08:30:07.258 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:30:07 compute-0 nova_compute[240062]: 2026-01-31 08:30:07.259 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:07 compute-0 ceph-mon[75294]: pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:08 compute-0 nova_compute[240062]: 2026-01-31 08:30:08.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:09 compute-0 nova_compute[240062]: 2026-01-31 08:30:09.149 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:09 compute-0 nova_compute[240062]: 2026-01-31 08:30:09.254 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:09 compute-0 ceph-mon[75294]: pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:11 compute-0 podman[251349]: 2026-01-31 08:30:11.185504712 +0000 UTC m=+0.060513589 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 31 08:30:11 compute-0 ceph-mon[75294]: pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:13 compute-0 ceph-mon[75294]: pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 08:30:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:15 compute-0 ceph-mon[75294]: pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 08:30:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 08:30:16 compute-0 ceph-mon[75294]: pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 08:30:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Jan 31 08:30:19 compute-0 ceph-mon[75294]: pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Jan 31 08:30:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:30:21 compute-0 ceph-mon[75294]: pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:30:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:30:23 compute-0 ceph-mon[75294]: pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:30:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:30:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:25 compute-0 ceph-mon[75294]: pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:30:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:30:26 compute-0 ceph-mon[75294]: pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:30:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:28.388686) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848228388780, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2075, "num_deletes": 253, "total_data_size": 3517220, "memory_usage": 3577840, "flush_reason": "Manual Compaction"}
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848228876727, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3427295, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21168, "largest_seqno": 23242, "table_properties": {"data_size": 3417854, "index_size": 5999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19007, "raw_average_key_size": 20, "raw_value_size": 3398923, "raw_average_value_size": 3600, "num_data_blocks": 271, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848001, "oldest_key_time": 1769848001, "file_creation_time": 1769848228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 488060 microseconds, and 6147 cpu microseconds.
Jan 31 08:30:28 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:28.876769) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3427295 bytes OK
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:28.876785) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.043053) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.043106) EVENT_LOG_v1 {"time_micros": 1769848229043097, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.043134) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3508526, prev total WAL file size 3508526, number of live WAL files 2.
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.043905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3346KB)], [50(7864KB)]
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848229043943, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11480614, "oldest_snapshot_seqno": -1}
Jan 31 08:30:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4864 keys, 9709115 bytes, temperature: kUnknown
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848229333811, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9709115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9673755, "index_size": 22130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 119117, "raw_average_key_size": 24, "raw_value_size": 9583045, "raw_average_value_size": 1970, "num_data_blocks": 929, "num_entries": 4864, "num_filter_entries": 4864, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769848229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.334015) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9709115 bytes
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.427393) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.6 rd, 33.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5385, records dropped: 521 output_compression: NoCompression
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.427431) EVENT_LOG_v1 {"time_micros": 1769848229427418, "job": 26, "event": "compaction_finished", "compaction_time_micros": 289931, "compaction_time_cpu_micros": 16088, "output_level": 6, "num_output_files": 1, "total_output_size": 9709115, "num_input_records": 5385, "num_output_records": 4864, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848229428113, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848229429021, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.043827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.429143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.429149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.429150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.429152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:29 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:30:29.429153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:29 compute-0 ceph-mon[75294]: pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Jan 31 08:30:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 08:30:31 compute-0 ceph-mon[75294]: pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 08:30:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:32 compute-0 ceph-mon[75294]: pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:35 compute-0 ceph-mon[75294]: pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:37 compute-0 ceph-mon[75294]: pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:30:38 compute-0 podman[251375]: 2026-01-31 08:30:38.172566533 +0000 UTC m=+0.045569821 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:30:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:30:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393990062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:30:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:30:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393990062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:30:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:39 compute-0 ceph-mon[75294]: pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:30:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1393990062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:30:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1393990062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:30:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:41 compute-0 ceph-mon[75294]: pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:42 compute-0 podman[251396]: 2026-01-31 08:30:42.217249412 +0000 UTC m=+0.089958560 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:30:42 compute-0 ceph-mon[75294]: pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:45 compute-0 ceph-mon[75294]: pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:30:46.973 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:30:46.973 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:30:46.973 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:47 compute-0 ceph-mon[75294]: pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:48 compute-0 ceph-mon[75294]: pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:30:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:49 compute-0 sudo[251424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:49 compute-0 sudo[251424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:49 compute-0 sudo[251424]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:50 compute-0 sudo[251449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:30:50 compute-0 sudo[251449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:30:50 compute-0 sudo[251449]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:30:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:30:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:30:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:30:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:30:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:30:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:30:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:30:50 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:30:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:30:50 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:50 compute-0 sudo[251505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:50 compute-0 sudo[251505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:50 compute-0 sudo[251505]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:50 compute-0 sudo[251530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:30:50 compute-0 sudo[251530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:30:50
Jan 31 08:30:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:30:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:30:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'backups', '.rgw.root', 'vms']
Jan 31 08:30:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:30:50 compute-0 podman[251567]: 2026-01-31 08:30:50.966584591 +0000 UTC m=+0.076351910 container create d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_keldysh, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:51 compute-0 podman[251567]: 2026-01-31 08:30:50.90742645 +0000 UTC m=+0.017193779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:51 compute-0 systemd[1]: Started libpod-conmon-d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9.scope.
Jan 31 08:30:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:51 compute-0 podman[251567]: 2026-01-31 08:30:51.108004591 +0000 UTC m=+0.217771930 container init d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:30:51 compute-0 podman[251567]: 2026-01-31 08:30:51.114822107 +0000 UTC m=+0.224589416 container start d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_keldysh, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:30:51 compute-0 friendly_keldysh[251583]: 167 167
Jan 31 08:30:51 compute-0 systemd[1]: libpod-d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9.scope: Deactivated successfully.
Jan 31 08:30:51 compute-0 podman[251567]: 2026-01-31 08:30:51.149257045 +0000 UTC m=+0.259024374 container attach d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 08:30:51 compute-0 podman[251567]: 2026-01-31 08:30:51.149970864 +0000 UTC m=+0.259738163 container died d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_keldysh, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:30:51 compute-0 ceph-mon[75294]: pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:30:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:30:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:30:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:30:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:30:51 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:30:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-90dfce433cb33d97ddda7a6c22f4e06d734e2554f9612ca8cf3e7bc36a81e425-merged.mount: Deactivated successfully.
Jan 31 08:30:51 compute-0 podman[251567]: 2026-01-31 08:30:51.617806184 +0000 UTC m=+0.727573553 container remove d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_keldysh, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:30:51 compute-0 systemd[1]: libpod-conmon-d74b148a65ba8399941b2c50a38c8c7df5def7a70117aa524e925846eced80e9.scope: Deactivated successfully.
Jan 31 08:30:51 compute-0 podman[251609]: 2026-01-31 08:30:51.75610746 +0000 UTC m=+0.026347129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:51 compute-0 podman[251609]: 2026-01-31 08:30:51.915593553 +0000 UTC m=+0.185833192 container create a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:51 compute-0 systemd[1]: Started libpod-conmon-a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb.scope.
Jan 31 08:30:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f726ef3264b32bbcad71832db77f33277bf0837b9d3719c99996fb51b7ae1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f726ef3264b32bbcad71832db77f33277bf0837b9d3719c99996fb51b7ae1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f726ef3264b32bbcad71832db77f33277bf0837b9d3719c99996fb51b7ae1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f726ef3264b32bbcad71832db77f33277bf0837b9d3719c99996fb51b7ae1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f726ef3264b32bbcad71832db77f33277bf0837b9d3719c99996fb51b7ae1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:52 compute-0 podman[251609]: 2026-01-31 08:30:52.10312654 +0000 UTC m=+0.373366199 container init a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:30:52 compute-0 podman[251609]: 2026-01-31 08:30:52.110718246 +0000 UTC m=+0.380957885 container start a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:52 compute-0 podman[251609]: 2026-01-31 08:30:52.149528383 +0000 UTC m=+0.419768042 container attach a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:30:52 compute-0 busy_nightingale[251625]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:30:52 compute-0 busy_nightingale[251625]: --> All data devices are unavailable
Jan 31 08:30:52 compute-0 systemd[1]: libpod-a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb.scope: Deactivated successfully.
Jan 31 08:30:52 compute-0 podman[251609]: 2026-01-31 08:30:52.53172409 +0000 UTC m=+0.801963739 container died a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-66f726ef3264b32bbcad71832db77f33277bf0837b9d3719c99996fb51b7ae1b-merged.mount: Deactivated successfully.
Jan 31 08:30:53 compute-0 podman[251609]: 2026-01-31 08:30:53.175237374 +0000 UTC m=+1.445477013 container remove a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:53 compute-0 systemd[1]: libpod-conmon-a8b710c7a37e9bd9246416cfc38a0d7b673e32250ba891482550e1159e66dafb.scope: Deactivated successfully.
Jan 31 08:30:53 compute-0 sudo[251530]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:53 compute-0 sudo[251658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:53 compute-0 sudo[251658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:53 compute-0 sudo[251658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:53 compute-0 sudo[251683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:30:53 compute-0 sudo[251683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:53 compute-0 ceph-mon[75294]: pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:53 compute-0 podman[251720]: 2026-01-31 08:30:53.643457283 +0000 UTC m=+0.104814375 container create f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:30:53 compute-0 podman[251720]: 2026-01-31 08:30:53.563818465 +0000 UTC m=+0.025175577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:53 compute-0 systemd[1]: Started libpod-conmon-f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db.scope.
Jan 31 08:30:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:53 compute-0 podman[251720]: 2026-01-31 08:30:53.811483669 +0000 UTC m=+0.272840771 container init f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:30:53 compute-0 podman[251720]: 2026-01-31 08:30:53.817528633 +0000 UTC m=+0.278885735 container start f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:53 compute-0 pensive_murdock[251736]: 167 167
Jan 31 08:30:53 compute-0 systemd[1]: libpod-f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db.scope: Deactivated successfully.
Jan 31 08:30:53 compute-0 podman[251720]: 2026-01-31 08:30:53.849679748 +0000 UTC m=+0.311036870 container attach f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 31 08:30:53 compute-0 podman[251720]: 2026-01-31 08:30:53.850325857 +0000 UTC m=+0.311682949 container died f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6055ea2893846290ff2f5b2470846c0f71c5cef2c2beeafb6772e8474839e91-merged.mount: Deactivated successfully.
Jan 31 08:30:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:54 compute-0 podman[251720]: 2026-01-31 08:30:54.064110117 +0000 UTC m=+0.525467229 container remove f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_murdock, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:54 compute-0 systemd[1]: libpod-conmon-f637b3c9d64d049774e8a3023799c3231c514bce8bcb9807b981315a8709b2db.scope: Deactivated successfully.
Jan 31 08:30:54 compute-0 podman[251759]: 2026-01-31 08:30:54.250528414 +0000 UTC m=+0.087557345 container create 45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:30:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:54 compute-0 podman[251759]: 2026-01-31 08:30:54.18645832 +0000 UTC m=+0.023487261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:54 compute-0 systemd[1]: Started libpod-conmon-45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96.scope.
Jan 31 08:30:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fdc5ee3aa2831830b3eed6b8b666905a4d7a8f0fa0e07dbd8378994468cfb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fdc5ee3aa2831830b3eed6b8b666905a4d7a8f0fa0e07dbd8378994468cfb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fdc5ee3aa2831830b3eed6b8b666905a4d7a8f0fa0e07dbd8378994468cfb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4fdc5ee3aa2831830b3eed6b8b666905a4d7a8f0fa0e07dbd8378994468cfb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:54 compute-0 podman[251759]: 2026-01-31 08:30:54.411301882 +0000 UTC m=+0.248330833 container init 45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bardeen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:54 compute-0 podman[251759]: 2026-01-31 08:30:54.418112657 +0000 UTC m=+0.255141588 container start 45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:54 compute-0 podman[251759]: 2026-01-31 08:30:54.472404996 +0000 UTC m=+0.309433937 container attach 45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bardeen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]: {
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:     "0": [
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:         {
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "devices": [
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "/dev/loop3"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             ],
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_name": "ceph_lv0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_size": "21470642176",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "name": "ceph_lv0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "tags": {
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cluster_name": "ceph",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.crush_device_class": "",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.encrypted": "0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.objectstore": "bluestore",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osd_id": "0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.type": "block",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.vdo": "0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.with_tpm": "0"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             },
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "type": "block",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "vg_name": "ceph_vg0"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:         }
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:     ],
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:     "1": [
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:         {
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "devices": [
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "/dev/loop4"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             ],
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_name": "ceph_lv1",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_size": "21470642176",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "name": "ceph_lv1",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "tags": {
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cluster_name": "ceph",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.crush_device_class": "",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.encrypted": "0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.objectstore": "bluestore",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osd_id": "1",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.type": "block",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.vdo": "0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.with_tpm": "0"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             },
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "type": "block",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "vg_name": "ceph_vg1"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:         }
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:     ],
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:     "2": [
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:         {
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "devices": [
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "/dev/loop5"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             ],
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_name": "ceph_lv2",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_size": "21470642176",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "name": "ceph_lv2",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "tags": {
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.cluster_name": "ceph",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.crush_device_class": "",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.encrypted": "0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.objectstore": "bluestore",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osd_id": "2",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.type": "block",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.vdo": "0",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:                 "ceph.with_tpm": "0"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             },
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "type": "block",
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:             "vg_name": "ceph_vg2"
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:         }
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]:     ]
Jan 31 08:30:54 compute-0 lucid_bardeen[251774]: }
Jan 31 08:30:54 compute-0 systemd[1]: libpod-45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96.scope: Deactivated successfully.
Jan 31 08:30:54 compute-0 podman[251759]: 2026-01-31 08:30:54.701187815 +0000 UTC m=+0.538216776 container died 45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4fdc5ee3aa2831830b3eed6b8b666905a4d7a8f0fa0e07dbd8378994468cfb8-merged.mount: Deactivated successfully.
Jan 31 08:30:54 compute-0 podman[251759]: 2026-01-31 08:30:54.977264874 +0000 UTC m=+0.814293815 container remove 45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bardeen, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:30:54 compute-0 systemd[1]: libpod-conmon-45569f5bb87c48a5890a30d7814b0ac0c4c05ccabf567bb2337fae9c8c517e96.scope: Deactivated successfully.
Jan 31 08:30:55 compute-0 sudo[251683]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:55 compute-0 sudo[251799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:55 compute-0 sudo[251799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:55 compute-0 sudo[251799]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:55 compute-0 sudo[251824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:30:55 compute-0 sudo[251824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:55 compute-0 podman[251862]: 2026-01-31 08:30:55.392681335 +0000 UTC m=+0.048836701 container create 74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:30:55 compute-0 systemd[1]: Started libpod-conmon-74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08.scope.
Jan 31 08:30:55 compute-0 podman[251862]: 2026-01-31 08:30:55.367228862 +0000 UTC m=+0.023384258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:55 compute-0 podman[251862]: 2026-01-31 08:30:55.516419535 +0000 UTC m=+0.172574901 container init 74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:55 compute-0 podman[251862]: 2026-01-31 08:30:55.521815782 +0000 UTC m=+0.177971148 container start 74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_knuth, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:30:55 compute-0 pensive_knuth[251878]: 167 167
Jan 31 08:30:55 compute-0 systemd[1]: libpod-74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08.scope: Deactivated successfully.
Jan 31 08:30:55 compute-0 ceph-mon[75294]: pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:55 compute-0 podman[251862]: 2026-01-31 08:30:55.560065344 +0000 UTC m=+0.216220710 container attach 74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:30:55 compute-0 podman[251862]: 2026-01-31 08:30:55.560757082 +0000 UTC m=+0.216912458 container died 74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_knuth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e3c689e6a411a899f84a9d45c24c2f5fa65630eb79f7ae22e33aaf980f9809f-merged.mount: Deactivated successfully.
Jan 31 08:30:55 compute-0 podman[251862]: 2026-01-31 08:30:55.710584042 +0000 UTC m=+0.366739408 container remove 74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_knuth, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:30:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:30:55 compute-0 systemd[1]: libpod-conmon-74e4e86b51152357a34220c76d08acd43360025523039036dedbbd5ecdc89a08.scope: Deactivated successfully.
Jan 31 08:30:55 compute-0 podman[251902]: 2026-01-31 08:30:55.853838124 +0000 UTC m=+0.061847626 container create 679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:30:55 compute-0 podman[251902]: 2026-01-31 08:30:55.808367505 +0000 UTC m=+0.016377027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:55 compute-0 systemd[1]: Started libpod-conmon-679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7.scope.
Jan 31 08:30:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d832d8375a356bbdf46092218f96f41c2c102689cc2cf9c577404a70cb6794cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d832d8375a356bbdf46092218f96f41c2c102689cc2cf9c577404a70cb6794cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d832d8375a356bbdf46092218f96f41c2c102689cc2cf9c577404a70cb6794cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d832d8375a356bbdf46092218f96f41c2c102689cc2cf9c577404a70cb6794cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:56 compute-0 podman[251902]: 2026-01-31 08:30:56.027253315 +0000 UTC m=+0.235262857 container init 679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:56 compute-0 podman[251902]: 2026-01-31 08:30:56.03661627 +0000 UTC m=+0.244625812 container start 679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:30:56 compute-0 podman[251902]: 2026-01-31 08:30:56.050901339 +0000 UTC m=+0.258910921 container attach 679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 31 08:30:56 compute-0 lvm[251994]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:30:56 compute-0 lvm[251994]: VG ceph_vg0 finished
Jan 31 08:30:56 compute-0 lvm[251997]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:30:56 compute-0 lvm[251997]: VG ceph_vg1 finished
Jan 31 08:30:56 compute-0 lvm[251999]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:30:56 compute-0 lvm[251999]: VG ceph_vg2 finished
Jan 31 08:30:56 compute-0 fervent_visvesvaraya[251918]: {}
Jan 31 08:30:56 compute-0 systemd[1]: libpod-679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7.scope: Deactivated successfully.
Jan 31 08:30:56 compute-0 systemd[1]: libpod-679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7.scope: Consumed 1.190s CPU time.
Jan 31 08:30:56 compute-0 podman[251902]: 2026-01-31 08:30:56.876199533 +0000 UTC m=+1.084209035 container died 679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d832d8375a356bbdf46092218f96f41c2c102689cc2cf9c577404a70cb6794cc-merged.mount: Deactivated successfully.
Jan 31 08:30:56 compute-0 podman[251902]: 2026-01-31 08:30:56.940070571 +0000 UTC m=+1.148080083 container remove 679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:30:56 compute-0 systemd[1]: libpod-conmon-679ace56cbcc991f76e084f6b9f3314f8c0233eac971c95a708e08a7292077a7.scope: Deactivated successfully.
Jan 31 08:30:56 compute-0 sudo[251824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:30:56 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:30:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:30:57 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:30:57 compute-0 sudo[252013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:30:57 compute-0 sudo[252013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:57 compute-0 sudo[252013]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:57 compute-0 ceph-mon[75294]: pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:30:57 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:30:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:59 compute-0 ceph-mon[75294]: pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:01 compute-0 nova_compute[240062]: 2026-01-31 08:31:01.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:01 compute-0 ceph-mon[75294]: pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.321 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.321 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.321 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.322 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.322 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:03 compute-0 ceph-mon[75294]: pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:31:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495032523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:03 compute-0 nova_compute[240062]: 2026-01-31 08:31:03.914 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.040 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.042 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.042 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.042 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.452 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.452 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.508 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing inventories for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.559 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating ProviderTree inventory for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.560 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.573 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing aggregate associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.602 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing trait associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:31:04 compute-0 nova_compute[240062]: 2026-01-31 08:31:04.624 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/495032523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:31:05 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172304324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:05 compute-0 nova_compute[240062]: 2026-01-31 08:31:05.262 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:05 compute-0 nova_compute[240062]: 2026-01-31 08:31:05.268 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:05 compute-0 nova_compute[240062]: 2026-01-31 08:31:05.378 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:05 compute-0 nova_compute[240062]: 2026-01-31 08:31:05.380 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:31:05 compute-0 nova_compute[240062]: 2026-01-31 08:31:05.380 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:06 compute-0 ceph-mon[75294]: pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3172304324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:31:07 compute-0 nova_compute[240062]: 2026-01-31 08:31:07.380 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:07 compute-0 nova_compute[240062]: 2026-01-31 08:31:07.381 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:07 compute-0 nova_compute[240062]: 2026-01-31 08:31:07.381 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:31:07 compute-0 nova_compute[240062]: 2026-01-31 08:31:07.381 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:31:07 compute-0 nova_compute[240062]: 2026-01-31 08:31:07.475 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:31:07 compute-0 nova_compute[240062]: 2026-01-31 08:31:07.475 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:07 compute-0 nova_compute[240062]: 2026-01-31 08:31:07.475 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:07 compute-0 ceph-mon[75294]: pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:08 compute-0 nova_compute[240062]: 2026-01-31 08:31:08.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:08 compute-0 nova_compute[240062]: 2026-01-31 08:31:08.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:31:09 compute-0 podman[252083]: 2026-01-31 08:31:09.584402484 +0000 UTC m=+0.456644946 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:31:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:09 compute-0 ceph-mon[75294]: pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:10 compute-0 ceph-mon[75294]: pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:12 compute-0 nova_compute[240062]: 2026-01-31 08:31:12.266 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:12 compute-0 nova_compute[240062]: 2026-01-31 08:31:12.266 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:12 compute-0 nova_compute[240062]: 2026-01-31 08:31:12.267 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:12 compute-0 nova_compute[240062]: 2026-01-31 08:31:12.267 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:31:13 compute-0 podman[252103]: 2026-01-31 08:31:13.221572007 +0000 UTC m=+0.087377301 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, container_name=ovn_controller)
Jan 31 08:31:13 compute-0 ceph-mon[75294]: pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:15 compute-0 nova_compute[240062]: 2026-01-31 08:31:15.127 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:31:15 compute-0 ceph-mon[75294]: pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:17 compute-0 ceph-mon[75294]: pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:17 compute-0 nova_compute[240062]: 2026-01-31 08:31:17.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:19 compute-0 ceph-mon[75294]: pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:21 compute-0 ceph-mon[75294]: pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:23 compute-0 ceph-mon[75294]: pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:24 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:24 compute-0 ceph-mon[75294]: pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:27 compute-0 ceph-mon[75294]: pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:29 compute-0 ceph-mon[75294]: pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:29 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:31 compute-0 ceph-mon[75294]: pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:33 compute-0 ceph-mon[75294]: pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:35 compute-0 ceph-mon[75294]: pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:36 compute-0 ceph-mon[75294]: pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:39 compute-0 ceph-mon[75294]: pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:31:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1425908336' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:31:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:31:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1425908336' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:31:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:40 compute-0 podman[252130]: 2026-01-31 08:31:40.162476666 +0000 UTC m=+0.037731968 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:31:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1425908336' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:31:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1425908336' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:31:41 compute-0 ceph-mon[75294]: pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:43 compute-0 sshd-session[252149]: Invalid user sol from 193.32.162.145 port 36638
Jan 31 08:31:43 compute-0 podman[252151]: 2026-01-31 08:31:43.404491228 +0000 UTC m=+0.059923753 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:31:43 compute-0 sshd-session[252149]: Connection closed by invalid user sol 193.32.162.145 port 36638 [preauth]
Jan 31 08:31:43 compute-0 ceph-mon[75294]: pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:45 compute-0 ceph-mon[75294]: pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:31:46.974 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:31:46.975 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:31:46.975 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:47 compute-0 ceph-mon[75294]: pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:49 compute-0 ceph-mon[75294]: pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:31:50
Jan 31 08:31:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:31:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:31:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta']
Jan 31 08:31:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:31:51 compute-0 ceph-mon[75294]: pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:53 compute-0 ceph-mon[75294]: pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:55 compute-0 ceph-mon[75294]: pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:31:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:31:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:57 compute-0 sudo[252178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:57 compute-0 sudo[252178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:57 compute-0 sudo[252178]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:57 compute-0 sudo[252203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:31:57 compute-0 sudo[252203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:57 compute-0 ceph-mon[75294]: pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:57 compute-0 podman[252272]: 2026-01-31 08:31:57.611350136 +0000 UTC m=+0.079744202 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:31:57 compute-0 podman[252272]: 2026-01-31 08:31:57.728093065 +0000 UTC m=+0.196487111 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:31:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:58 compute-0 sudo[252203]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:31:58 compute-0 sudo[252457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:58 compute-0 sudo[252457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:58 compute-0 sudo[252457]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:58 compute-0 sudo[252482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:31:58 compute-0 sudo[252482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:58 compute-0 sudo[252482]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:31:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:31:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:59 compute-0 sudo[252539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:59 compute-0 sudo[252539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:59 compute-0 sudo[252539]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:59 compute-0 sudo[252564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:31:59 compute-0 sudo[252564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:59 compute-0 podman[252602]: 2026-01-31 08:31:59.256128183 +0000 UTC m=+0.019259184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:31:59 compute-0 podman[252602]: 2026-01-31 08:31:59.432356852 +0000 UTC m=+0.195487843 container create 24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hertz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:31:59 compute-0 ceph-mon[75294]: pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:31:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:59 compute-0 systemd[1]: Started libpod-conmon-24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d.scope.
Jan 31 08:31:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:59 compute-0 podman[252602]: 2026-01-31 08:31:59.942928416 +0000 UTC m=+0.706059447 container init 24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:59 compute-0 podman[252602]: 2026-01-31 08:31:59.948885958 +0000 UTC m=+0.712016949 container start 24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:31:59 compute-0 kind_hertz[252618]: 167 167
Jan 31 08:31:59 compute-0 systemd[1]: libpod-24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d.scope: Deactivated successfully.
Jan 31 08:31:59 compute-0 conmon[252618]: conmon 24fc924149692a90ecbc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d.scope/container/memory.events
Jan 31 08:32:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:00 compute-0 podman[252602]: 2026-01-31 08:32:00.213861242 +0000 UTC m=+0.976992233 container attach 24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:32:00 compute-0 podman[252602]: 2026-01-31 08:32:00.214306565 +0000 UTC m=+0.977437556 container died 24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-67c7e4ba523ffc144a6eaf76eb94afa023b6dfbdda3a45151a3970153b8b0a24-merged.mount: Deactivated successfully.
Jan 31 08:32:01 compute-0 podman[252602]: 2026-01-31 08:32:01.421449788 +0000 UTC m=+2.184580769 container remove 24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:32:01 compute-0 systemd[1]: libpod-conmon-24fc924149692a90ecbc09d665730c917d96b96d850bb2f30497eaaef78a836d.scope: Deactivated successfully.
Jan 31 08:32:01 compute-0 podman[252642]: 2026-01-31 08:32:01.577034804 +0000 UTC m=+0.078415636 container create f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:32:01 compute-0 podman[252642]: 2026-01-31 08:32:01.517645947 +0000 UTC m=+0.019026809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:01 compute-0 systemd[1]: Started libpod-conmon-f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0.scope.
Jan 31 08:32:01 compute-0 ceph-mon[75294]: pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f92c4e2379bc96a77b47562d3c38204d06e55a5bc752c2538b1ac78c2dd099e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f92c4e2379bc96a77b47562d3c38204d06e55a5bc752c2538b1ac78c2dd099e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f92c4e2379bc96a77b47562d3c38204d06e55a5bc752c2538b1ac78c2dd099e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f92c4e2379bc96a77b47562d3c38204d06e55a5bc752c2538b1ac78c2dd099e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f92c4e2379bc96a77b47562d3c38204d06e55a5bc752c2538b1ac78c2dd099e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:01 compute-0 podman[252642]: 2026-01-31 08:32:01.94815985 +0000 UTC m=+0.449540702 container init f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_noether, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:32:01 compute-0 podman[252642]: 2026-01-31 08:32:01.954018619 +0000 UTC m=+0.455399451 container start f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:01 compute-0 podman[252642]: 2026-01-31 08:32:01.980997174 +0000 UTC m=+0.482378026 container attach f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:32:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:02 compute-0 determined_noether[252658]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:32:02 compute-0 determined_noether[252658]: --> All data devices are unavailable
Jan 31 08:32:02 compute-0 systemd[1]: libpod-f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0.scope: Deactivated successfully.
Jan 31 08:32:02 compute-0 podman[252678]: 2026-01-31 08:32:02.398631467 +0000 UTC m=+0.024268993 container died f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f92c4e2379bc96a77b47562d3c38204d06e55a5bc752c2538b1ac78c2dd099e-merged.mount: Deactivated successfully.
Jan 31 08:32:02 compute-0 podman[252678]: 2026-01-31 08:32:02.60182818 +0000 UTC m=+0.227465696 container remove f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:32:02 compute-0 systemd[1]: libpod-conmon-f2bbb904c484e03b09d6e83157d9707d25f92476bc890e052d6a69381eb7e1a0.scope: Deactivated successfully.
Jan 31 08:32:02 compute-0 sudo[252564]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:02 compute-0 sudo[252693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:02 compute-0 sudo[252693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:02 compute-0 sudo[252693]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:02 compute-0 sudo[252718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:32:02 compute-0 sudo[252718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:02 compute-0 ceph-mon[75294]: pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:03 compute-0 podman[252755]: 2026-01-31 08:32:03.002787908 +0000 UTC m=+0.018316840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:03 compute-0 podman[252755]: 2026-01-31 08:32:03.137380902 +0000 UTC m=+0.152909804 container create 6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:32:03 compute-0 systemd[1]: Started libpod-conmon-6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3.scope.
Jan 31 08:32:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:03 compute-0 podman[252755]: 2026-01-31 08:32:03.350447255 +0000 UTC m=+0.365976177 container init 6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:03 compute-0 podman[252755]: 2026-01-31 08:32:03.35504064 +0000 UTC m=+0.370569542 container start 6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:03 compute-0 keen_kilby[252771]: 167 167
Jan 31 08:32:03 compute-0 systemd[1]: libpod-6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3.scope: Deactivated successfully.
Jan 31 08:32:03 compute-0 podman[252755]: 2026-01-31 08:32:03.377984365 +0000 UTC m=+0.393513277 container attach 6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:32:03 compute-0 podman[252755]: 2026-01-31 08:32:03.378499539 +0000 UTC m=+0.394028441 container died 6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-baca86c133e3aeafbd6dd043d8899dc6a4dabe7fd0252659d4290d6e54a3d27f-merged.mount: Deactivated successfully.
Jan 31 08:32:03 compute-0 podman[252755]: 2026-01-31 08:32:03.445784741 +0000 UTC m=+0.461313643 container remove 6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:03 compute-0 systemd[1]: libpod-conmon-6c95789b2106391066e48c6a21d4065bf397235bcb67be70d5791aeef3ef87e3.scope: Deactivated successfully.
Jan 31 08:32:03 compute-0 podman[252795]: 2026-01-31 08:32:03.563387364 +0000 UTC m=+0.036373572 container create dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:32:03 compute-0 systemd[1]: Started libpod-conmon-dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b.scope.
Jan 31 08:32:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36163a23a03592f7edad1b7bff4eedae7e0a985625e198ca47d7f3f82b0937f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36163a23a03592f7edad1b7bff4eedae7e0a985625e198ca47d7f3f82b0937f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36163a23a03592f7edad1b7bff4eedae7e0a985625e198ca47d7f3f82b0937f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36163a23a03592f7edad1b7bff4eedae7e0a985625e198ca47d7f3f82b0937f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:03 compute-0 podman[252795]: 2026-01-31 08:32:03.545827035 +0000 UTC m=+0.018813183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:03 compute-0 podman[252795]: 2026-01-31 08:32:03.641015947 +0000 UTC m=+0.114002075 container init dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:03 compute-0 podman[252795]: 2026-01-31 08:32:03.64771918 +0000 UTC m=+0.120705308 container start dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_bassi, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:03 compute-0 podman[252795]: 2026-01-31 08:32:03.698095302 +0000 UTC m=+0.171081430 container attach dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_bassi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:32:03 compute-0 elegant_bassi[252811]: {
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:     "0": [
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:         {
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "devices": [
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "/dev/loop3"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             ],
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_name": "ceph_lv0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_size": "21470642176",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "name": "ceph_lv0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "tags": {
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cluster_name": "ceph",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.crush_device_class": "",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.encrypted": "0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.objectstore": "bluestore",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osd_id": "0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.type": "block",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.vdo": "0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.with_tpm": "0"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             },
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "type": "block",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "vg_name": "ceph_vg0"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:         }
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:     ],
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:     "1": [
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:         {
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "devices": [
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "/dev/loop4"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             ],
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_name": "ceph_lv1",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_size": "21470642176",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "name": "ceph_lv1",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "tags": {
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cluster_name": "ceph",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.crush_device_class": "",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.encrypted": "0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.objectstore": "bluestore",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osd_id": "1",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.type": "block",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.vdo": "0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.with_tpm": "0"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             },
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "type": "block",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "vg_name": "ceph_vg1"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:         }
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:     ],
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:     "2": [
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:         {
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "devices": [
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "/dev/loop5"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             ],
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_name": "ceph_lv2",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_size": "21470642176",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "name": "ceph_lv2",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "tags": {
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.cluster_name": "ceph",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.crush_device_class": "",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.encrypted": "0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.objectstore": "bluestore",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osd_id": "2",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.type": "block",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.vdo": "0",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:                 "ceph.with_tpm": "0"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             },
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "type": "block",
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:             "vg_name": "ceph_vg2"
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:         }
Jan 31 08:32:03 compute-0 elegant_bassi[252811]:     ]
Jan 31 08:32:03 compute-0 elegant_bassi[252811]: }
Jan 31 08:32:03 compute-0 systemd[1]: libpod-dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b.scope: Deactivated successfully.
Jan 31 08:32:03 compute-0 podman[252795]: 2026-01-31 08:32:03.929066291 +0000 UTC m=+0.402052429 container died dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_bassi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-36163a23a03592f7edad1b7bff4eedae7e0a985625e198ca47d7f3f82b0937f2-merged.mount: Deactivated successfully.
Jan 31 08:32:03 compute-0 podman[252795]: 2026-01-31 08:32:03.979039001 +0000 UTC m=+0.452025129 container remove dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_bassi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:32:03 compute-0 systemd[1]: libpod-conmon-dd6d207520423b59d0f0a4017a70223e75022f8ddbcaf28156839299a546e87b.scope: Deactivated successfully.
Jan 31 08:32:04 compute-0 sudo[252718]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:04 compute-0 sudo[252832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:04 compute-0 sudo[252832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:04 compute-0 sudo[252832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:04 compute-0 sudo[252857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:32:04 compute-0 sudo[252857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:04 compute-0 podman[252894]: 2026-01-31 08:32:04.427484033 +0000 UTC m=+0.059924223 container create 3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:04 compute-0 podman[252894]: 2026-01-31 08:32:04.3865814 +0000 UTC m=+0.019021610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:04 compute-0 systemd[1]: Started libpod-conmon-3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b.scope.
Jan 31 08:32:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:04 compute-0 podman[252894]: 2026-01-31 08:32:04.657763403 +0000 UTC m=+0.290203653 container init 3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:32:04 compute-0 podman[252894]: 2026-01-31 08:32:04.662896224 +0000 UTC m=+0.295336414 container start 3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:04 compute-0 musing_kare[252910]: 167 167
Jan 31 08:32:04 compute-0 systemd[1]: libpod-3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b.scope: Deactivated successfully.
Jan 31 08:32:04 compute-0 podman[252894]: 2026-01-31 08:32:04.676844243 +0000 UTC m=+0.309284433 container attach 3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:04 compute-0 podman[252894]: 2026-01-31 08:32:04.677275365 +0000 UTC m=+0.309715555 container died 3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kare, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6b2007b6937d13ac43ccb12d451668613cd8769410bdbfbd837d8a91e46f277-merged.mount: Deactivated successfully.
Jan 31 08:32:04 compute-0 podman[252894]: 2026-01-31 08:32:04.747704252 +0000 UTC m=+0.380144442 container remove 3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:04 compute-0 systemd[1]: libpod-conmon-3431bc8a8764ecc745ddba4d6f94503341d8da2e3999bd776f2942820174448b.scope: Deactivated successfully.
Jan 31 08:32:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:04 compute-0 podman[252935]: 2026-01-31 08:32:04.887473989 +0000 UTC m=+0.041953344 container create 7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:32:04 compute-0 systemd[1]: Started libpod-conmon-7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354.scope.
Jan 31 08:32:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291468adc43bae9e277abe789076a8d5c6679338a4c8af16675a4a8277cc5add/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:04 compute-0 podman[252935]: 2026-01-31 08:32:04.869180701 +0000 UTC m=+0.023660076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291468adc43bae9e277abe789076a8d5c6679338a4c8af16675a4a8277cc5add/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291468adc43bae9e277abe789076a8d5c6679338a4c8af16675a4a8277cc5add/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291468adc43bae9e277abe789076a8d5c6679338a4c8af16675a4a8277cc5add/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:04 compute-0 podman[252935]: 2026-01-31 08:32:04.983772081 +0000 UTC m=+0.138251456 container init 7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:32:04 compute-0 podman[252935]: 2026-01-31 08:32:04.991107371 +0000 UTC m=+0.145586726 container start 7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 08:32:04 compute-0 podman[252935]: 2026-01-31 08:32:04.997470834 +0000 UTC m=+0.151950189 container attach 7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_chatterjee, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:05 compute-0 ceph-mon[75294]: pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:05 compute-0 lvm[253029]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:32:05 compute-0 lvm[253029]: VG ceph_vg0 finished
Jan 31 08:32:05 compute-0 lvm[253031]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:32:05 compute-0 lvm[253031]: VG ceph_vg1 finished
Jan 31 08:32:05 compute-0 lvm[253033]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:32:05 compute-0 lvm[253033]: VG ceph_vg2 finished
Jan 31 08:32:05 compute-0 ecstatic_chatterjee[252952]: {}
Jan 31 08:32:05 compute-0 systemd[1]: libpod-7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354.scope: Deactivated successfully.
Jan 31 08:32:05 compute-0 systemd[1]: libpod-7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354.scope: Consumed 1.157s CPU time.
Jan 31 08:32:05 compute-0 podman[252935]: 2026-01-31 08:32:05.802138576 +0000 UTC m=+0.956617941 container died 7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-291468adc43bae9e277abe789076a8d5c6679338a4c8af16675a4a8277cc5add-merged.mount: Deactivated successfully.
Jan 31 08:32:05 compute-0 podman[252935]: 2026-01-31 08:32:05.853857104 +0000 UTC m=+1.008336459 container remove 7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:32:05 compute-0 systemd[1]: libpod-conmon-7b2a1358b494c9fec9f34a09dd539f5f35dbd0f14f517f266e3668b74727b354.scope: Deactivated successfully.
Jan 31 08:32:05 compute-0 sudo[252857]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:32:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:32:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:32:05 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:32:05 compute-0 sudo[253048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:32:05 compute-0 sudo[253048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:05 compute-0 sudo[253048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.362 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.363 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.363 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.363 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.363 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.567 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.569 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.569 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.569 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:32:06 compute-0 nova_compute[240062]: 2026-01-31 08:32:06.569 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:32:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:32:06 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:32:06 compute-0 ceph-mon[75294]: pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:32:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270664686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.121 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.277 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.278 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.278 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.278 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.484 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.484 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:32:07 compute-0 nova_compute[240062]: 2026-01-31 08:32:07.499 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:08 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4270664686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:32:08 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757235307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.076 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.081 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.149 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.150 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.151 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.943 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.943 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:32:08 compute-0 nova_compute[240062]: 2026-01-31 08:32:08.943 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:32:09 compute-0 nova_compute[240062]: 2026-01-31 08:32:09.070 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:32:09 compute-0 nova_compute[240062]: 2026-01-31 08:32:09.070 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:09 compute-0 nova_compute[240062]: 2026-01-31 08:32:09.276 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:10 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/757235307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:10 compute-0 ceph-mon[75294]: pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:11 compute-0 nova_compute[240062]: 2026-01-31 08:32:11.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:11 compute-0 nova_compute[240062]: 2026-01-31 08:32:11.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:11 compute-0 podman[253117]: 2026-01-31 08:32:11.184241862 +0000 UTC m=+0.046374224 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:32:11 compute-0 ceph-mon[75294]: pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:12 compute-0 nova_compute[240062]: 2026-01-31 08:32:12.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:13 compute-0 ceph-mon[75294]: pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:14 compute-0 podman[253136]: 2026-01-31 08:32:14.208518915 +0000 UTC m=+0.084704877 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:32:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.391625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848335391753, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1361, "num_deletes": 507, "total_data_size": 1683264, "memory_usage": 1712240, "flush_reason": "Manual Compaction"}
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848335547892, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1285416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23243, "largest_seqno": 24603, "table_properties": {"data_size": 1280089, "index_size": 2211, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15297, "raw_average_key_size": 18, "raw_value_size": 1267019, "raw_average_value_size": 1566, "num_data_blocks": 101, "num_entries": 809, "num_filter_entries": 809, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848229, "oldest_key_time": 1769848229, "file_creation_time": 1769848335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 156252 microseconds, and 2819 cpu microseconds.
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.547936) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1285416 bytes OK
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.547954) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.641736) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.641784) EVENT_LOG_v1 {"time_micros": 1769848335641774, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.641808) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1676086, prev total WAL file size 1676725, number of live WAL files 2.
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.828395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1255KB)], [53(9481KB)]
Jan 31 08:32:15 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848335828452, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10994531, "oldest_snapshot_seqno": -1}
Jan 31 08:32:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:16 compute-0 ceph-mon[75294]: pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4673 keys, 7923886 bytes, temperature: kUnknown
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848336439595, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7923886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7892192, "index_size": 18912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 116730, "raw_average_key_size": 24, "raw_value_size": 7807207, "raw_average_value_size": 1670, "num_data_blocks": 785, "num_entries": 4673, "num_filter_entries": 4673, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769848335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.439841) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7923886 bytes
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.480237) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.0 rd, 13.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(14.7) write-amplify(6.2) OK, records in: 5673, records dropped: 1000 output_compression: NoCompression
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.480269) EVENT_LOG_v1 {"time_micros": 1769848336480257, "job": 28, "event": "compaction_finished", "compaction_time_micros": 611242, "compaction_time_cpu_micros": 14069, "output_level": 6, "num_output_files": 1, "total_output_size": 7923886, "num_input_records": 5673, "num_output_records": 4673, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848336480550, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848336481376, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:15.828310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.481505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.481511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.481515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.481517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:16 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:32:16.481518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:17 compute-0 ceph-mon[75294]: pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:18 compute-0 ceph-mon[75294]: pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:21 compute-0 ceph-mon[75294]: pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:22 compute-0 sshd-session[253162]: Invalid user solana from 80.94.92.182 port 60790
Jan 31 08:32:22 compute-0 sshd-session[253162]: Connection closed by invalid user solana 80.94.92.182 port 60790 [preauth]
Jan 31 08:32:22 compute-0 ceph-mon[75294]: pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:25 compute-0 ceph-mon[75294]: pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:27 compute-0 ceph-mon[75294]: pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:29 compute-0 ceph-mon[75294]: pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:31 compute-0 ceph-mon[75294]: pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:33 compute-0 ceph-mon[75294]: pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:35 compute-0 ceph-mon[75294]: pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:37 compute-0 ceph-mon[75294]: pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:32:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/607891716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:32:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:32:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/607891716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:32:39 compute-0 ceph-mon[75294]: pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/607891716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:32:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/607891716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:32:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:41 compute-0 ceph-mon[75294]: pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:42 compute-0 podman[253164]: 2026-01-31 08:32:42.169444203 +0000 UTC m=+0.042354214 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 31 08:32:43 compute-0 ceph-mon[75294]: pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:45 compute-0 podman[253184]: 2026-01-31 08:32:45.215580692 +0000 UTC m=+0.092635733 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 08:32:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:45 compute-0 ceph-mon[75294]: pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:32:46.975 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:32:46.976 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:32:46.976 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:47 compute-0 ceph-mon[75294]: pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:49 compute-0 ceph-mon[75294]: pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:32:50
Jan 31 08:32:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:32:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:32:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'vms', 'backups', '.mgr']
Jan 31 08:32:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:32:51 compute-0 ceph-mon[75294]: pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:53 compute-0 ceph-mon[75294]: pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:55 compute-0 ceph-mon[75294]: pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:32:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:32:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:57 compute-0 ceph-mon[75294]: pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:59 compute-0 ceph-mon[75294]: pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:01 compute-0 ceph-mon[75294]: pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:03 compute-0 ceph-mon[75294]: pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:04 compute-0 nova_compute[240062]: 2026-01-31 08:33:04.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:04 compute-0 nova_compute[240062]: 2026-01-31 08:33:04.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:04 compute-0 nova_compute[240062]: 2026-01-31 08:33:04.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:33:04 compute-0 ceph-mon[75294]: pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:05 compute-0 nova_compute[240062]: 2026-01-31 08:33:05.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:05 compute-0 nova_compute[240062]: 2026-01-31 08:33:05.384 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:05 compute-0 nova_compute[240062]: 2026-01-31 08:33:05.384 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:05 compute-0 nova_compute[240062]: 2026-01-31 08:33:05.385 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:05 compute-0 nova_compute[240062]: 2026-01-31 08:33:05.385 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:33:05 compute-0 nova_compute[240062]: 2026-01-31 08:33:05.385 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:33:05 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847126272' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:05 compute-0 nova_compute[240062]: 2026-01-31 08:33:05.906 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:06 compute-0 sudo[253232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:06 compute-0 sudo[253232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:06 compute-0 sudo[253232]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:06 compute-0 nova_compute[240062]: 2026-01-31 08:33:06.032 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:33:06 compute-0 nova_compute[240062]: 2026-01-31 08:33:06.032 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:33:06 compute-0 nova_compute[240062]: 2026-01-31 08:33:06.033 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:06 compute-0 nova_compute[240062]: 2026-01-31 08:33:06.033 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:06 compute-0 sudo[253257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:33:06 compute-0 sudo[253257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3847126272' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:06 compute-0 sudo[253257]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:33:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:33:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:33:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:33:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:33:06 compute-0 nova_compute[240062]: 2026-01-31 08:33:06.863 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:33:06 compute-0 nova_compute[240062]: 2026-01-31 08:33:06.863 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:33:06 compute-0 nova_compute[240062]: 2026-01-31 08:33:06.884 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:33:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:33:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:33:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:33:06 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:33:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:33:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:06 compute-0 sudo[253313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:06 compute-0 sudo[253313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:06 compute-0 sudo[253313]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:07 compute-0 sudo[253347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:33:07 compute-0 sudo[253347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:07 compute-0 podman[253394]: 2026-01-31 08:33:07.279794239 +0000 UTC m=+0.020168021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:33:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784953813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:07 compute-0 podman[253394]: 2026-01-31 08:33:07.425578499 +0000 UTC m=+0.165952281 container create 335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kare, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:33:07 compute-0 nova_compute[240062]: 2026-01-31 08:33:07.444 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:07 compute-0 nova_compute[240062]: 2026-01-31 08:33:07.449 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:07 compute-0 ceph-mon[75294]: pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:33:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:33:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:33:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:33:07 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:07 compute-0 nova_compute[240062]: 2026-01-31 08:33:07.540 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:07 compute-0 nova_compute[240062]: 2026-01-31 08:33:07.542 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:33:07 compute-0 nova_compute[240062]: 2026-01-31 08:33:07.542 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:07 compute-0 systemd[1]: Started libpod-conmon-335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46.scope.
Jan 31 08:33:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:07 compute-0 podman[253394]: 2026-01-31 08:33:07.645567799 +0000 UTC m=+0.385941581 container init 335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kare, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:33:07 compute-0 podman[253394]: 2026-01-31 08:33:07.651513421 +0000 UTC m=+0.391887203 container start 335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kare, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:07 compute-0 trusting_kare[253412]: 167 167
Jan 31 08:33:07 compute-0 systemd[1]: libpod-335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46.scope: Deactivated successfully.
Jan 31 08:33:07 compute-0 podman[253394]: 2026-01-31 08:33:07.722881194 +0000 UTC m=+0.463254996 container attach 335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:33:07 compute-0 podman[253394]: 2026-01-31 08:33:07.723360647 +0000 UTC m=+0.463734429 container died 335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-88426bea30eefea6e8f5bacfc4f6ad912afc6fc50a901a3bf17b19e91332e614-merged.mount: Deactivated successfully.
Jan 31 08:33:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:08 compute-0 podman[253394]: 2026-01-31 08:33:08.242593156 +0000 UTC m=+0.982966938 container remove 335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kare, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:08 compute-0 systemd[1]: libpod-conmon-335d09695ec54edfc520a1b6233c83a5b113caf11bfb0f73d3f829afb09ffe46.scope: Deactivated successfully.
Jan 31 08:33:08 compute-0 podman[253438]: 2026-01-31 08:33:08.38009275 +0000 UTC m=+0.045353406 container create a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jang, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:33:08 compute-0 systemd[1]: Started libpod-conmon-a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7.scope.
Jan 31 08:33:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:08 compute-0 podman[253438]: 2026-01-31 08:33:08.357205017 +0000 UTC m=+0.022465683 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc50164da916ee0cb8f87e06e4bf3d0fdb9b9ca7d7265dda69d2038652947a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc50164da916ee0cb8f87e06e4bf3d0fdb9b9ca7d7265dda69d2038652947a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc50164da916ee0cb8f87e06e4bf3d0fdb9b9ca7d7265dda69d2038652947a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc50164da916ee0cb8f87e06e4bf3d0fdb9b9ca7d7265dda69d2038652947a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc50164da916ee0cb8f87e06e4bf3d0fdb9b9ca7d7265dda69d2038652947a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:08 compute-0 podman[253438]: 2026-01-31 08:33:08.609112947 +0000 UTC m=+0.274373593 container init a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:33:08 compute-0 podman[253438]: 2026-01-31 08:33:08.613878826 +0000 UTC m=+0.279139472 container start a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jang, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:33:08 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/784953813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:08 compute-0 podman[253438]: 2026-01-31 08:33:08.751875724 +0000 UTC m=+0.417136400 container attach a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:33:09 compute-0 gallant_jang[253454]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:33:09 compute-0 gallant_jang[253454]: --> All data devices are unavailable
Jan 31 08:33:09 compute-0 systemd[1]: libpod-a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7.scope: Deactivated successfully.
Jan 31 08:33:09 compute-0 podman[253474]: 2026-01-31 08:33:09.176085276 +0000 UTC m=+0.024354514 container died a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jang, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc50164da916ee0cb8f87e06e4bf3d0fdb9b9ca7d7265dda69d2038652947a3-merged.mount: Deactivated successfully.
Jan 31 08:33:09 compute-0 podman[253474]: 2026-01-31 08:33:09.457225932 +0000 UTC m=+0.305495150 container remove a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:33:09 compute-0 systemd[1]: libpod-conmon-a6a6e89f1abe9e308561f240366717ff24eb002aa2866809a8cd07c492c84cb7.scope: Deactivated successfully.
Jan 31 08:33:09 compute-0 sudo[253347]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:09 compute-0 nova_compute[240062]: 2026-01-31 08:33:09.542 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:09 compute-0 nova_compute[240062]: 2026-01-31 08:33:09.545 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:33:09 compute-0 nova_compute[240062]: 2026-01-31 08:33:09.545 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:33:09 compute-0 sudo[253489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:09 compute-0 sudo[253489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:09 compute-0 sudo[253489]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:09 compute-0 sudo[253514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:33:09 compute-0 sudo[253514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:09 compute-0 nova_compute[240062]: 2026-01-31 08:33:09.670 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:33:09 compute-0 nova_compute[240062]: 2026-01-31 08:33:09.672 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:09 compute-0 nova_compute[240062]: 2026-01-31 08:33:09.672 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:10 compute-0 ceph-mon[75294]: pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:10 compute-0 podman[253552]: 2026-01-31 08:33:10.363344806 +0000 UTC m=+0.057789475 container create 8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:33:10 compute-0 systemd[1]: Started libpod-conmon-8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75.scope.
Jan 31 08:33:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:10 compute-0 podman[253552]: 2026-01-31 08:33:10.333426831 +0000 UTC m=+0.027871500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:10 compute-0 podman[253552]: 2026-01-31 08:33:10.462205728 +0000 UTC m=+0.156650427 container init 8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bassi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:10 compute-0 podman[253552]: 2026-01-31 08:33:10.467532593 +0000 UTC m=+0.161977262 container start 8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:10 compute-0 intelligent_bassi[253567]: 167 167
Jan 31 08:33:10 compute-0 systemd[1]: libpod-8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75.scope: Deactivated successfully.
Jan 31 08:33:10 compute-0 podman[253552]: 2026-01-31 08:33:10.493051938 +0000 UTC m=+0.187496637 container attach 8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:10 compute-0 podman[253552]: 2026-01-31 08:33:10.493688155 +0000 UTC m=+0.188132824 container died 8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bassi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64f02c2ff52d4a5b171e46523c2b95644b737d8aafdf2be58bf9236ba24c872-merged.mount: Deactivated successfully.
Jan 31 08:33:10 compute-0 podman[253552]: 2026-01-31 08:33:10.563499176 +0000 UTC m=+0.257943845 container remove 8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:33:10 compute-0 systemd[1]: libpod-conmon-8328c960d5c91b957a7c46bae6191afa63b0bc0867acdd5ad8e6fbe41b5edd75.scope: Deactivated successfully.
Jan 31 08:33:10 compute-0 podman[253591]: 2026-01-31 08:33:10.690639768 +0000 UTC m=+0.041172903 container create 060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_herschel, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:33:10 compute-0 systemd[1]: Started libpod-conmon-060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b.scope.
Jan 31 08:33:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d9ba70d61702face0fb1e152e4bf4964b4ff11e8e95208ac7c4518b196cd4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d9ba70d61702face0fb1e152e4bf4964b4ff11e8e95208ac7c4518b196cd4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d9ba70d61702face0fb1e152e4bf4964b4ff11e8e95208ac7c4518b196cd4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d9ba70d61702face0fb1e152e4bf4964b4ff11e8e95208ac7c4518b196cd4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:10 compute-0 podman[253591]: 2026-01-31 08:33:10.766041792 +0000 UTC m=+0.116574947 container init 060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:33:10 compute-0 podman[253591]: 2026-01-31 08:33:10.670351735 +0000 UTC m=+0.020884890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:10 compute-0 podman[253591]: 2026-01-31 08:33:10.774030589 +0000 UTC m=+0.124563724 container start 060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:10 compute-0 podman[253591]: 2026-01-31 08:33:10.77812453 +0000 UTC m=+0.128657745 container attach 060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]: {
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:     "0": [
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:         {
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "devices": [
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "/dev/loop3"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             ],
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_name": "ceph_lv0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_size": "21470642176",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "name": "ceph_lv0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "tags": {
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cluster_name": "ceph",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.crush_device_class": "",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.encrypted": "0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.objectstore": "bluestore",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osd_id": "0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.type": "block",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.vdo": "0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.with_tpm": "0"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             },
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "type": "block",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "vg_name": "ceph_vg0"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:         }
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:     ],
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:     "1": [
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:         {
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "devices": [
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "/dev/loop4"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             ],
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_name": "ceph_lv1",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_size": "21470642176",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "name": "ceph_lv1",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "tags": {
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cluster_name": "ceph",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.crush_device_class": "",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.encrypted": "0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.objectstore": "bluestore",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osd_id": "1",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.type": "block",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.vdo": "0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.with_tpm": "0"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             },
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "type": "block",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "vg_name": "ceph_vg1"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:         }
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:     ],
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:     "2": [
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:         {
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "devices": [
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "/dev/loop5"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             ],
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_name": "ceph_lv2",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_size": "21470642176",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "name": "ceph_lv2",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "tags": {
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.cluster_name": "ceph",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.crush_device_class": "",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.encrypted": "0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.objectstore": "bluestore",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osd_id": "2",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.type": "block",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.vdo": "0",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:                 "ceph.with_tpm": "0"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             },
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "type": "block",
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:             "vg_name": "ceph_vg2"
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:         }
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]:     ]
Jan 31 08:33:11 compute-0 sleepy_herschel[253607]: }
Jan 31 08:33:11 compute-0 systemd[1]: libpod-060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b.scope: Deactivated successfully.
Jan 31 08:33:11 compute-0 podman[253591]: 2026-01-31 08:33:11.029450665 +0000 UTC m=+0.379983810 container died 060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:11 compute-0 nova_compute[240062]: 2026-01-31 08:33:11.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:11 compute-0 nova_compute[240062]: 2026-01-31 08:33:11.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d9ba70d61702face0fb1e152e4bf4964b4ff11e8e95208ac7c4518b196cd4a-merged.mount: Deactivated successfully.
Jan 31 08:33:11 compute-0 podman[253591]: 2026-01-31 08:33:11.231353632 +0000 UTC m=+0.581886767 container remove 060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:33:11 compute-0 systemd[1]: libpod-conmon-060f839ffba5c92b4e8de03f26539bb16716e106bf33149e35d89fedab7ceb9b.scope: Deactivated successfully.
Jan 31 08:33:11 compute-0 sudo[253514]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:11 compute-0 sudo[253628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:11 compute-0 sudo[253628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:11 compute-0 sudo[253628]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:11 compute-0 sudo[253653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:33:11 compute-0 sudo[253653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:11 compute-0 ceph-mon[75294]: pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:11 compute-0 podman[253690]: 2026-01-31 08:33:11.618561485 +0000 UTC m=+0.043496445 container create 9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mclaren, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:33:11 compute-0 systemd[1]: Started libpod-conmon-9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88.scope.
Jan 31 08:33:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:11 compute-0 podman[253690]: 2026-01-31 08:33:11.594347616 +0000 UTC m=+0.019282606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:11 compute-0 podman[253690]: 2026-01-31 08:33:11.708965027 +0000 UTC m=+0.133900007 container init 9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:33:11 compute-0 podman[253690]: 2026-01-31 08:33:11.714461307 +0000 UTC m=+0.139396267 container start 9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:33:11 compute-0 festive_mclaren[253707]: 167 167
Jan 31 08:33:11 compute-0 systemd[1]: libpod-9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88.scope: Deactivated successfully.
Jan 31 08:33:11 compute-0 conmon[253707]: conmon 9a186b29b8aff4f34c20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88.scope/container/memory.events
Jan 31 08:33:11 compute-0 podman[253690]: 2026-01-31 08:33:11.723052051 +0000 UTC m=+0.147987031 container attach 9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:11 compute-0 podman[253690]: 2026-01-31 08:33:11.723787921 +0000 UTC m=+0.148722891 container died 9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a2da73995b0b9971640025814544a23429a68ac5d2935544873ba06533ed6c2-merged.mount: Deactivated successfully.
Jan 31 08:33:11 compute-0 podman[253690]: 2026-01-31 08:33:11.812296381 +0000 UTC m=+0.237231341 container remove 9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:33:11 compute-0 systemd[1]: libpod-conmon-9a186b29b8aff4f34c20edc3a717910c2b3d27fb0ce530414815382e5e694e88.scope: Deactivated successfully.
Jan 31 08:33:11 compute-0 podman[253731]: 2026-01-31 08:33:11.939067213 +0000 UTC m=+0.039306001 container create 465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:33:11 compute-0 systemd[1]: Started libpod-conmon-465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a.scope.
Jan 31 08:33:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c20c80db220ce1ca6fac0435b23c20bddf67e6f814ccf9aa4b555da2e8f40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c20c80db220ce1ca6fac0435b23c20bddf67e6f814ccf9aa4b555da2e8f40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c20c80db220ce1ca6fac0435b23c20bddf67e6f814ccf9aa4b555da2e8f40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c20c80db220ce1ca6fac0435b23c20bddf67e6f814ccf9aa4b555da2e8f40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:12 compute-0 podman[253731]: 2026-01-31 08:33:11.921046372 +0000 UTC m=+0.021285180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:12 compute-0 podman[253731]: 2026-01-31 08:33:12.044580407 +0000 UTC m=+0.144819215 container init 465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:33:12 compute-0 podman[253731]: 2026-01-31 08:33:12.052683617 +0000 UTC m=+0.152922395 container start 465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:33:12 compute-0 podman[253731]: 2026-01-31 08:33:12.065046954 +0000 UTC m=+0.165285752 container attach 465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:12 compute-0 nova_compute[240062]: 2026-01-31 08:33:12.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:12 compute-0 lvm[253832]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:33:12 compute-0 lvm[253833]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:33:12 compute-0 lvm[253832]: VG ceph_vg0 finished
Jan 31 08:33:12 compute-0 lvm[253833]: VG ceph_vg1 finished
Jan 31 08:33:12 compute-0 lvm[253838]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:33:12 compute-0 lvm[253838]: VG ceph_vg2 finished
Jan 31 08:33:12 compute-0 podman[253823]: 2026-01-31 08:33:12.753607924 +0000 UTC m=+0.063552261 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:33:12 compute-0 loving_sanderson[253748]: {}
Jan 31 08:33:12 compute-0 systemd[1]: libpod-465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a.scope: Deactivated successfully.
Jan 31 08:33:12 compute-0 systemd[1]: libpod-465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a.scope: Consumed 1.125s CPU time.
Jan 31 08:33:12 compute-0 podman[253731]: 2026-01-31 08:33:12.850289116 +0000 UTC m=+0.950527914 container died 465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb7c20c80db220ce1ca6fac0435b23c20bddf67e6f814ccf9aa4b555da2e8f40-merged.mount: Deactivated successfully.
Jan 31 08:33:13 compute-0 podman[253731]: 2026-01-31 08:33:13.072392485 +0000 UTC m=+1.172631273 container remove 465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:33:13 compute-0 sudo[253653]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:13 compute-0 systemd[1]: libpod-conmon-465c1103ca50ac8b41a0734ab4cab28ac142d00c30beea13096b19bfebc6b34a.scope: Deactivated successfully.
Jan 31 08:33:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:33:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:33:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:33:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:33:13 compute-0 sudo[253863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:33:13 compute-0 sudo[253863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:13 compute-0 sudo[253863]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:13 compute-0 ceph-mon[75294]: pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:33:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:33:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:15 compute-0 ceph-mon[75294]: pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:16 compute-0 podman[253888]: 2026-01-31 08:33:16.198278714 +0000 UTC m=+0.065273958 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:33:17 compute-0 ceph-mon[75294]: pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:19 compute-0 ceph-mon[75294]: pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:21 compute-0 ceph-mon[75294]: pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:22 compute-0 ceph-mon[75294]: pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:25 compute-0 ceph-mon[75294]: pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:27 compute-0 ceph-mon[75294]: pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:29 compute-0 ceph-mon[75294]: pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:31 compute-0 ceph-mon[75294]: pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:33 compute-0 ceph-mon[75294]: pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:35 compute-0 ceph-mon[75294]: pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:37 compute-0 ceph-mon[75294]: pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:33:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642240984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:33:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:33:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642240984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:33:39 compute-0 ceph-mon[75294]: pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3642240984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:33:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3642240984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:33:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:41 compute-0 ceph-mon[75294]: pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:43 compute-0 podman[253914]: 2026-01-31 08:33:43.164544229 +0000 UTC m=+0.038559091 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:33:43 compute-0 ceph-mon[75294]: pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:45 compute-0 ceph-mon[75294]: pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:33:46.977 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:33:46.978 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:33:46.978 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:47 compute-0 podman[253934]: 2026-01-31 08:33:47.18618652 +0000 UTC m=+0.057678322 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:33:47 compute-0 ceph-mon[75294]: pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:48 compute-0 ceph-mon[75294]: pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:33:50
Jan 31 08:33:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:33:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:33:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'backups', 'default.rgw.log']
Jan 31 08:33:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:33:51 compute-0 ceph-mon[75294]: pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:53 compute-0 ceph-mon[75294]: pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:55 compute-0 ceph-mon[75294]: pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:33:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:33:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:57 compute-0 ceph-mon[75294]: pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:59 compute-0 ceph-mon[75294]: pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:01 compute-0 ceph-mon[75294]: pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:03 compute-0 ceph-mon[75294]: pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:05 compute-0 nova_compute[240062]: 2026-01-31 08:34:05.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:05 compute-0 nova_compute[240062]: 2026-01-31 08:34:05.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:05 compute-0 nova_compute[240062]: 2026-01-31 08:34:05.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:34:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:05 compute-0 ceph-mon[75294]: pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:34:06 compute-0 ceph-mon[75294]: pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.192 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.192 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.193 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.193 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.193 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:34:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550786432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.731 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.864 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.865 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.865 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:07 compute-0 nova_compute[240062]: 2026-01-31 08:34:07.865 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:07 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3550786432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:08 compute-0 nova_compute[240062]: 2026-01-31 08:34:08.156 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:34:08 compute-0 nova_compute[240062]: 2026-01-31 08:34:08.156 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:34:08 compute-0 nova_compute[240062]: 2026-01-31 08:34:08.171 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:34:09 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794716699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:09 compute-0 nova_compute[240062]: 2026-01-31 08:34:09.635 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:09 compute-0 nova_compute[240062]: 2026-01-31 08:34:09.640 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:09 compute-0 ceph-mon[75294]: pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:09 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1794716699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:09 compute-0 nova_compute[240062]: 2026-01-31 08:34:09.732 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:09 compute-0 nova_compute[240062]: 2026-01-31 08:34:09.735 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:34:09 compute-0 nova_compute[240062]: 2026-01-31 08:34:09.736 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:10 compute-0 nova_compute[240062]: 2026-01-31 08:34:10.737 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:10 compute-0 nova_compute[240062]: 2026-01-31 08:34:10.738 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:34:10 compute-0 nova_compute[240062]: 2026-01-31 08:34:10.738 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:34:10 compute-0 nova_compute[240062]: 2026-01-31 08:34:10.867 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:34:11 compute-0 nova_compute[240062]: 2026-01-31 08:34:11.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:11 compute-0 nova_compute[240062]: 2026-01-31 08:34:11.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:11 compute-0 ceph-mon[75294]: pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:12 compute-0 nova_compute[240062]: 2026-01-31 08:34:12.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:12 compute-0 ceph-mon[75294]: pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:13 compute-0 sudo[254005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:13 compute-0 sudo[254005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:13 compute-0 sudo[254005]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:13 compute-0 podman[254029]: 2026-01-31 08:34:13.354703699 +0000 UTC m=+0.049934376 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:34:13 compute-0 sudo[254036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:34:13 compute-0 sudo[254036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:13 compute-0 sudo[254036]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:34:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:34:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:34:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:34:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:34:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:34:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:34:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:34:13 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:34:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:34:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:13 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:34:13 compute-0 sudo[254106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:13 compute-0 sudo[254106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:13 compute-0 sudo[254106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:13 compute-0 sudo[254131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:34:13 compute-0 sudo[254131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:14 compute-0 nova_compute[240062]: 2026-01-31 08:34:14.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:14 compute-0 podman[254168]: 2026-01-31 08:34:14.20646792 +0000 UTC m=+0.018516203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:14 compute-0 podman[254168]: 2026-01-31 08:34:14.303442342 +0000 UTC m=+0.115490605 container create 51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williams, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:14 compute-0 systemd[1]: Started libpod-conmon-51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33.scope.
Jan 31 08:34:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:14 compute-0 podman[254168]: 2026-01-31 08:34:14.413694412 +0000 UTC m=+0.225742705 container init 51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williams, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:34:14 compute-0 podman[254168]: 2026-01-31 08:34:14.423130839 +0000 UTC m=+0.235179112 container start 51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:34:14 compute-0 xenodochial_williams[254185]: 167 167
Jan 31 08:34:14 compute-0 systemd[1]: libpod-51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33.scope: Deactivated successfully.
Jan 31 08:34:14 compute-0 conmon[254185]: conmon 51a3dbcdcd433b0dcc28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33.scope/container/memory.events
Jan 31 08:34:14 compute-0 podman[254168]: 2026-01-31 08:34:14.431699091 +0000 UTC m=+0.243747374 container attach 51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williams, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:14 compute-0 podman[254168]: 2026-01-31 08:34:14.432926115 +0000 UTC m=+0.244974398 container died 51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-854ae51e5b5b32ac6bf90ffaf7a221f705aab47ee010be461db3dcc380a8926a-merged.mount: Deactivated successfully.
Jan 31 08:34:14 compute-0 podman[254168]: 2026-01-31 08:34:14.552554931 +0000 UTC m=+0.364603204 container remove 51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williams, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:34:14 compute-0 systemd[1]: libpod-conmon-51a3dbcdcd433b0dcc28a089e363ad3b4a50bd36938c86689a088f63bed3ed33.scope: Deactivated successfully.
Jan 31 08:34:14 compute-0 podman[254211]: 2026-01-31 08:34:14.671257141 +0000 UTC m=+0.043913112 container create 5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:34:14 compute-0 systemd[1]: Started libpod-conmon-5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e.scope.
Jan 31 08:34:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:14 compute-0 podman[254211]: 2026-01-31 08:34:14.648911015 +0000 UTC m=+0.021567006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1c246edeb12663ac63ccf343ce418b39bff71f02094e5daa91f050836ebf4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1c246edeb12663ac63ccf343ce418b39bff71f02094e5daa91f050836ebf4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1c246edeb12663ac63ccf343ce418b39bff71f02094e5daa91f050836ebf4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1c246edeb12663ac63ccf343ce418b39bff71f02094e5daa91f050836ebf4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1c246edeb12663ac63ccf343ce418b39bff71f02094e5daa91f050836ebf4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:14 compute-0 podman[254211]: 2026-01-31 08:34:14.776066915 +0000 UTC m=+0.148722906 container init 5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:34:14 compute-0 podman[254211]: 2026-01-31 08:34:14.781485712 +0000 UTC m=+0.154141683 container start 5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:34:14 compute-0 podman[254211]: 2026-01-31 08:34:14.795270596 +0000 UTC m=+0.167926567 container attach 5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:34:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:34:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:34:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:34:14 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:14 compute-0 ceph-mon[75294]: pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:15 compute-0 jolly_stonebraker[254227]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:34:15 compute-0 jolly_stonebraker[254227]: --> All data devices are unavailable
Jan 31 08:34:15 compute-0 systemd[1]: libpod-5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e.scope: Deactivated successfully.
Jan 31 08:34:15 compute-0 podman[254211]: 2026-01-31 08:34:15.173426897 +0000 UTC m=+0.546082878 container died 5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a1c246edeb12663ac63ccf343ce418b39bff71f02094e5daa91f050836ebf4c-merged.mount: Deactivated successfully.
Jan 31 08:34:15 compute-0 podman[254211]: 2026-01-31 08:34:15.426285519 +0000 UTC m=+0.798941490 container remove 5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:34:15 compute-0 systemd[1]: libpod-conmon-5d8e6186331f527dd09d5dbd7f4dd3373858207fce3c0f1f45374712f2702a2e.scope: Deactivated successfully.
Jan 31 08:34:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:15 compute-0 sudo[254131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:15 compute-0 sudo[254260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:15 compute-0 sudo[254260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:15 compute-0 sudo[254260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:15 compute-0 sudo[254285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:34:15 compute-0 sudo[254285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:15 compute-0 podman[254322]: 2026-01-31 08:34:15.841282189 +0000 UTC m=+0.037538410 container create ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mahavira, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:34:15 compute-0 systemd[1]: Started libpod-conmon-ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec.scope.
Jan 31 08:34:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:15 compute-0 podman[254322]: 2026-01-31 08:34:15.821389819 +0000 UTC m=+0.017646060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:15 compute-0 podman[254322]: 2026-01-31 08:34:15.919805649 +0000 UTC m=+0.116061890 container init ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:34:15 compute-0 podman[254322]: 2026-01-31 08:34:15.926278075 +0000 UTC m=+0.122534296 container start ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:15 compute-0 inspiring_mahavira[254338]: 167 167
Jan 31 08:34:15 compute-0 systemd[1]: libpod-ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec.scope: Deactivated successfully.
Jan 31 08:34:15 compute-0 podman[254322]: 2026-01-31 08:34:15.930864309 +0000 UTC m=+0.127120540 container attach ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mahavira, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:15 compute-0 podman[254322]: 2026-01-31 08:34:15.931863657 +0000 UTC m=+0.128119878 container died ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eaf510d16ca0193462634dc3138a51a9876a4759eea74b7f225136cda37180c-merged.mount: Deactivated successfully.
Jan 31 08:34:15 compute-0 podman[254322]: 2026-01-31 08:34:15.990752585 +0000 UTC m=+0.187008806 container remove ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mahavira, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:16 compute-0 systemd[1]: libpod-conmon-ed3c9d84d6300c6c9b619a658e8da1cfcfa154af9b37a1016f1f404578e7d0ec.scope: Deactivated successfully.
Jan 31 08:34:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:16 compute-0 podman[254361]: 2026-01-31 08:34:16.128949384 +0000 UTC m=+0.056317259 container create a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ishizaka, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:34:16 compute-0 podman[254361]: 2026-01-31 08:34:16.091448517 +0000 UTC m=+0.018816412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:16 compute-0 systemd[1]: Started libpod-conmon-a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7.scope.
Jan 31 08:34:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f51bb050e158bac7de1388689e9b8f1df546c17dc4620ee2c07b58762e81224/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f51bb050e158bac7de1388689e9b8f1df546c17dc4620ee2c07b58762e81224/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f51bb050e158bac7de1388689e9b8f1df546c17dc4620ee2c07b58762e81224/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f51bb050e158bac7de1388689e9b8f1df546c17dc4620ee2c07b58762e81224/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:16 compute-0 podman[254361]: 2026-01-31 08:34:16.269633321 +0000 UTC m=+0.197001226 container init a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:34:16 compute-0 podman[254361]: 2026-01-31 08:34:16.27401966 +0000 UTC m=+0.201387535 container start a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:34:16 compute-0 podman[254361]: 2026-01-31 08:34:16.363629562 +0000 UTC m=+0.290997467 container attach a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]: {
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:     "0": [
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:         {
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "devices": [
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "/dev/loop3"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             ],
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_name": "ceph_lv0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_size": "21470642176",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "name": "ceph_lv0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "tags": {
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cluster_name": "ceph",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.crush_device_class": "",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.encrypted": "0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.objectstore": "bluestore",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osd_id": "0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.type": "block",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.vdo": "0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.with_tpm": "0"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             },
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "type": "block",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "vg_name": "ceph_vg0"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:         }
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:     ],
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:     "1": [
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:         {
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "devices": [
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "/dev/loop4"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             ],
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_name": "ceph_lv1",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_size": "21470642176",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "name": "ceph_lv1",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "tags": {
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cluster_name": "ceph",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.crush_device_class": "",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.encrypted": "0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.objectstore": "bluestore",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osd_id": "1",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.type": "block",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.vdo": "0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.with_tpm": "0"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             },
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "type": "block",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "vg_name": "ceph_vg1"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:         }
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:     ],
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:     "2": [
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:         {
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "devices": [
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "/dev/loop5"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             ],
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_name": "ceph_lv2",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_size": "21470642176",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "name": "ceph_lv2",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "tags": {
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.cluster_name": "ceph",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.crush_device_class": "",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.encrypted": "0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.objectstore": "bluestore",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osd_id": "2",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.type": "block",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.vdo": "0",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:                 "ceph.with_tpm": "0"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             },
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "type": "block",
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:             "vg_name": "ceph_vg2"
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:         }
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]:     ]
Jan 31 08:34:16 compute-0 determined_ishizaka[254377]: }
Jan 31 08:34:16 compute-0 systemd[1]: libpod-a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7.scope: Deactivated successfully.
Jan 31 08:34:16 compute-0 conmon[254377]: conmon a9dcdaf025c0f25b630c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7.scope/container/memory.events
Jan 31 08:34:16 compute-0 podman[254361]: 2026-01-31 08:34:16.535813734 +0000 UTC m=+0.463181609 container died a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:34:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f51bb050e158bac7de1388689e9b8f1df546c17dc4620ee2c07b58762e81224-merged.mount: Deactivated successfully.
Jan 31 08:34:17 compute-0 podman[254361]: 2026-01-31 08:34:17.10251069 +0000 UTC m=+1.029878565 container remove a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:34:17 compute-0 systemd[1]: libpod-conmon-a9dcdaf025c0f25b630c6307730961c3e5af0afe7445617ee30116dbdd36d5f7.scope: Deactivated successfully.
Jan 31 08:34:17 compute-0 sudo[254285]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:17 compute-0 nova_compute[240062]: 2026-01-31 08:34:17.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:17 compute-0 sudo[254400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:17 compute-0 sudo[254400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:17 compute-0 sudo[254400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:17 compute-0 sudo[254425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:34:17 compute-0 sudo[254425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:17 compute-0 ceph-mon[75294]: pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:17 compute-0 podman[254449]: 2026-01-31 08:34:17.345867594 +0000 UTC m=+0.087421344 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.541046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848457541094, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1193, "num_deletes": 251, "total_data_size": 1799575, "memory_usage": 1831088, "flush_reason": "Manual Compaction"}
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 08:34:17 compute-0 podman[254488]: 2026-01-31 08:34:17.564842795 +0000 UTC m=+0.100699064 container create 89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_meninsky, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848457576481, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1782322, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24604, "largest_seqno": 25796, "table_properties": {"data_size": 1776608, "index_size": 3109, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12097, "raw_average_key_size": 19, "raw_value_size": 1765147, "raw_average_value_size": 2888, "num_data_blocks": 139, "num_entries": 611, "num_filter_entries": 611, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848335, "oldest_key_time": 1769848335, "file_creation_time": 1769848457, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 35478 microseconds, and 3531 cpu microseconds.
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:34:17 compute-0 podman[254488]: 2026-01-31 08:34:17.485330617 +0000 UTC m=+0.021186906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.576527) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1782322 bytes OK
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.576545) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.639046) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.639096) EVENT_LOG_v1 {"time_micros": 1769848457639085, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.639123) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1794141, prev total WAL file size 1794141, number of live WAL files 2.
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.639781) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1740KB)], [56(7738KB)]
Jan 31 08:34:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848457639827, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9706208, "oldest_snapshot_seqno": -1}
Jan 31 08:34:17 compute-0 systemd[1]: Started libpod-conmon-89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c.scope.
Jan 31 08:34:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4770 keys, 7958319 bytes, temperature: kUnknown
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848458140573, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7958319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7925961, "index_size": 19320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119400, "raw_average_key_size": 25, "raw_value_size": 7839219, "raw_average_value_size": 1643, "num_data_blocks": 797, "num_entries": 4770, "num_filter_entries": 4770, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769848457, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:34:18 compute-0 podman[254488]: 2026-01-31 08:34:18.150419604 +0000 UTC m=+0.686275873 container init 89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_meninsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:34:18 compute-0 podman[254488]: 2026-01-31 08:34:18.155461561 +0000 UTC m=+0.691317830 container start 89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:34:18 compute-0 cranky_meninsky[254504]: 167 167
Jan 31 08:34:18 compute-0 systemd[1]: libpod-89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c.scope: Deactivated successfully.
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.140835) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7958319 bytes
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.194144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.4 rd, 15.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.6 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(9.9) write-amplify(4.5) OK, records in: 5284, records dropped: 514 output_compression: NoCompression
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.194199) EVENT_LOG_v1 {"time_micros": 1769848458194178, "job": 30, "event": "compaction_finished", "compaction_time_micros": 500867, "compaction_time_cpu_micros": 12145, "output_level": 6, "num_output_files": 1, "total_output_size": 7958319, "num_input_records": 5284, "num_output_records": 4770, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848458194648, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848458195363, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:17.639625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.195443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.195452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.195454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.195456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:34:18.195458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:18 compute-0 podman[254488]: 2026-01-31 08:34:18.250134959 +0000 UTC m=+0.785991228 container attach 89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:34:18 compute-0 podman[254488]: 2026-01-31 08:34:18.250555001 +0000 UTC m=+0.786411270 container died 89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_meninsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-01a3313d78602b67589951bcc2b463e7dfeec91ebddf4db7ce4ef55a3dd8dcd2-merged.mount: Deactivated successfully.
Jan 31 08:34:18 compute-0 podman[254488]: 2026-01-31 08:34:18.633761288 +0000 UTC m=+1.169617557 container remove 89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:34:18 compute-0 systemd[1]: libpod-conmon-89b1b696d7d972179d53918d6665df0ba1ec942fad763a0f7b9abad4d135655c.scope: Deactivated successfully.
Jan 31 08:34:18 compute-0 podman[254528]: 2026-01-31 08:34:18.800109462 +0000 UTC m=+0.085714586 container create 64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lumiere, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:34:18 compute-0 podman[254528]: 2026-01-31 08:34:18.745389878 +0000 UTC m=+0.030995062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:18 compute-0 systemd[1]: Started libpod-conmon-64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153.scope.
Jan 31 08:34:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3ad5a413b79000ced1b9af43a6847cc57aeb6c4f0a2ec769683329b676ebe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3ad5a413b79000ced1b9af43a6847cc57aeb6c4f0a2ec769683329b676ebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3ad5a413b79000ced1b9af43a6847cc57aeb6c4f0a2ec769683329b676ebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3ad5a413b79000ced1b9af43a6847cc57aeb6c4f0a2ec769683329b676ebe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:18 compute-0 podman[254528]: 2026-01-31 08:34:18.981938716 +0000 UTC m=+0.267543870 container init 64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:34:18 compute-0 podman[254528]: 2026-01-31 08:34:18.987747124 +0000 UTC m=+0.273352238 container start 64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:34:19 compute-0 podman[254528]: 2026-01-31 08:34:19.069829851 +0000 UTC m=+0.355435015 container attach 64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:34:19 compute-0 lvm[254623]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:34:19 compute-0 lvm[254624]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:34:19 compute-0 lvm[254624]: VG ceph_vg1 finished
Jan 31 08:34:19 compute-0 lvm[254623]: VG ceph_vg0 finished
Jan 31 08:34:19 compute-0 lvm[254626]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:34:19 compute-0 lvm[254626]: VG ceph_vg2 finished
Jan 31 08:34:19 compute-0 ceph-mon[75294]: pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:19 compute-0 quizzical_lumiere[254545]: {}
Jan 31 08:34:19 compute-0 systemd[1]: libpod-64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153.scope: Deactivated successfully.
Jan 31 08:34:19 compute-0 systemd[1]: libpod-64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153.scope: Consumed 1.078s CPU time.
Jan 31 08:34:19 compute-0 podman[254528]: 2026-01-31 08:34:19.705229422 +0000 UTC m=+0.990834566 container died 64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c3ad5a413b79000ced1b9af43a6847cc57aeb6c4f0a2ec769683329b676ebe-merged.mount: Deactivated successfully.
Jan 31 08:34:19 compute-0 podman[254528]: 2026-01-31 08:34:19.769540897 +0000 UTC m=+1.055146021 container remove 64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lumiere, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:34:19 compute-0 systemd[1]: libpod-conmon-64cb01eb909b07a06b0e7a8ef467076bff249e1b463ad3bae23177c08a04f153.scope: Deactivated successfully.
Jan 31 08:34:19 compute-0 sudo[254425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:34:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:34:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:34:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:34:19 compute-0 sudo[254644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:34:19 compute-0 sudo[254644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:19 compute-0 sudo[254644]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:34:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:34:20 compute-0 ceph-mon[75294]: pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:23 compute-0 ceph-mon[75294]: pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:25 compute-0 ceph-mon[75294]: pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:27 compute-0 ceph-mon[75294]: pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:30 compute-0 ceph-mon[75294]: pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:31 compute-0 ceph-mon[75294]: pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:33 compute-0 ceph-mon[75294]: pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:34 compute-0 ceph-mon[75294]: pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:37 compute-0 ceph-mon[75294]: pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:34:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638061026' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:34:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:34:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638061026' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:34:39 compute-0 ceph-mon[75294]: pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1638061026' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:34:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1638061026' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:34:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:41 compute-0 ceph-mon[75294]: pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:43 compute-0 ceph-mon[75294]: pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:44 compute-0 podman[254669]: 2026-01-31 08:34:44.241834134 +0000 UTC m=+0.102519333 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:34:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:45 compute-0 ceph-mon[75294]: pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:34:46.978 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:34:46.979 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:34:46.979 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:47 compute-0 ceph-mon[75294]: pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:48 compute-0 podman[254689]: 2026-01-31 08:34:48.205785929 +0000 UTC m=+0.074355608 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:34:49 compute-0 ceph-mon[75294]: pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:34:50
Jan 31 08:34:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:34:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:34:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root']
Jan 31 08:34:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:34:51 compute-0 ceph-mon[75294]: pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:53 compute-0 ceph-mon[75294]: pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:53 compute-0 sshd-session[254716]: Invalid user sol from 193.32.162.145 port 37372
Jan 31 08:34:54 compute-0 sshd-session[254716]: Connection closed by invalid user sol 193.32.162.145 port 37372 [preauth]
Jan 31 08:34:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:55 compute-0 ceph-mon[75294]: pgmap v1283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:34:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:34:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:57 compute-0 ceph-mon[75294]: pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:59 compute-0 ceph-mon[75294]: pgmap v1285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:00 compute-0 ceph-mon[75294]: pgmap v1286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:03 compute-0 ceph-mon[75294]: pgmap v1287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:05 compute-0 ceph-mon[75294]: pgmap v1288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:06 compute-0 nova_compute[240062]: 2026-01-31 08:35:06.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:06 compute-0 nova_compute[240062]: 2026-01-31 08:35:06.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:35:07 compute-0 nova_compute[240062]: 2026-01-31 08:35:07.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:07 compute-0 ceph-mon[75294]: pgmap v1289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:08 compute-0 nova_compute[240062]: 2026-01-31 08:35:08.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:08 compute-0 nova_compute[240062]: 2026-01-31 08:35:08.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:35:08 compute-0 nova_compute[240062]: 2026-01-31 08:35:08.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:35:08 compute-0 nova_compute[240062]: 2026-01-31 08:35:08.177 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:35:08 compute-0 nova_compute[240062]: 2026-01-31 08:35:08.177 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:09 compute-0 ceph-mon[75294]: pgmap v1290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.248 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.248 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.248 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.249 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.249 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:35:09 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576448297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.778 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.907 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.908 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.909 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:09 compute-0 nova_compute[240062]: 2026-01-31 08:35:09.909 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:10 compute-0 nova_compute[240062]: 2026-01-31 08:35:10.114 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:35:10 compute-0 nova_compute[240062]: 2026-01-31 08:35:10.114 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:35:10 compute-0 nova_compute[240062]: 2026-01-31 08:35:10.128 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:10 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2576448297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:35:10 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/674708340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:10 compute-0 nova_compute[240062]: 2026-01-31 08:35:10.998 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.870s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:11 compute-0 nova_compute[240062]: 2026-01-31 08:35:11.002 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:35:11 compute-0 nova_compute[240062]: 2026-01-31 08:35:11.141 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:35:11 compute-0 nova_compute[240062]: 2026-01-31 08:35:11.143 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:35:11 compute-0 nova_compute[240062]: 2026-01-31 08:35:11.143 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:11 compute-0 ceph-mon[75294]: pgmap v1291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:11 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/674708340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:13 compute-0 ceph-mon[75294]: pgmap v1292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:14 compute-0 nova_compute[240062]: 2026-01-31 08:35:14.137 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:14 compute-0 nova_compute[240062]: 2026-01-31 08:35:14.137 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:14 compute-0 nova_compute[240062]: 2026-01-31 08:35:14.137 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:15 compute-0 ceph-mon[75294]: pgmap v1293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:15 compute-0 nova_compute[240062]: 2026-01-31 08:35:15.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:15 compute-0 podman[254762]: 2026-01-31 08:35:15.172602591 +0000 UTC m=+0.043594994 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:35:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:17 compute-0 ceph-mon[75294]: pgmap v1294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:19 compute-0 podman[254781]: 2026-01-31 08:35:19.200952865 +0000 UTC m=+0.076474796 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:35:19 compute-0 ceph-mon[75294]: pgmap v1295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:19 compute-0 sudo[254807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:35:19 compute-0 sudo[254807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:19 compute-0 sudo[254807]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:19 compute-0 sudo[254832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:35:19 compute-0 sudo[254832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:20 compute-0 sudo[254832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:35:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:35:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:35:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:35:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:35:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:35:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:35:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:35:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:35:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:35:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:35:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:35:20 compute-0 sudo[254888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:35:20 compute-0 sudo[254888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:20 compute-0 sudo[254888]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:20 compute-0 sudo[254913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:35:20 compute-0 sudo[254913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:20 compute-0 podman[254950]: 2026-01-31 08:35:20.824825977 +0000 UTC m=+0.023335025 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:35:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:35:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:35:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:35:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:35:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:35:21 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:35:21 compute-0 podman[254950]: 2026-01-31 08:35:21.174112855 +0000 UTC m=+0.372621873 container create 865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:35:21 compute-0 systemd[1]: Started libpod-conmon-865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d.scope.
Jan 31 08:35:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:35:21 compute-0 podman[254950]: 2026-01-31 08:35:21.962952879 +0000 UTC m=+1.161461917 container init 865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:35:21 compute-0 podman[254950]: 2026-01-31 08:35:21.968111429 +0000 UTC m=+1.166620437 container start 865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:35:21 compute-0 amazing_brahmagupta[254966]: 167 167
Jan 31 08:35:21 compute-0 systemd[1]: libpod-865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d.scope: Deactivated successfully.
Jan 31 08:35:22 compute-0 podman[254950]: 2026-01-31 08:35:22.112514657 +0000 UTC m=+1.311023665 container attach 865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:35:22 compute-0 podman[254950]: 2026-01-31 08:35:22.113829943 +0000 UTC m=+1.312338951 container died 865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:35:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:22 compute-0 ceph-mon[75294]: pgmap v1296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c03e8e4bdc46fbc571931ddc88eee53017e22a494c46394ece71bc3eb500407-merged.mount: Deactivated successfully.
Jan 31 08:35:23 compute-0 podman[254950]: 2026-01-31 08:35:23.223261005 +0000 UTC m=+2.421770023 container remove 865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:35:23 compute-0 systemd[1]: libpod-conmon-865205a201bf6adfd7e4455b95abd83483544393520a81c0deb5372168da4a9d.scope: Deactivated successfully.
Jan 31 08:35:23 compute-0 podman[254991]: 2026-01-31 08:35:23.332554411 +0000 UTC m=+0.018836682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:35:23 compute-0 podman[254991]: 2026-01-31 08:35:23.514902188 +0000 UTC m=+0.201184439 container create 45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_torvalds, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:35:23 compute-0 ceph-mon[75294]: pgmap v1297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:23 compute-0 systemd[1]: Started libpod-conmon-45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032.scope.
Jan 31 08:35:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a823e406411f749e3799ac0984aa0e291e0365a8f15f95950502f6a0c3df00f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a823e406411f749e3799ac0984aa0e291e0365a8f15f95950502f6a0c3df00f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a823e406411f749e3799ac0984aa0e291e0365a8f15f95950502f6a0c3df00f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a823e406411f749e3799ac0984aa0e291e0365a8f15f95950502f6a0c3df00f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a823e406411f749e3799ac0984aa0e291e0365a8f15f95950502f6a0c3df00f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:23 compute-0 podman[254991]: 2026-01-31 08:35:23.834562022 +0000 UTC m=+0.520844283 container init 45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_torvalds, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:35:23 compute-0 podman[254991]: 2026-01-31 08:35:23.842342863 +0000 UTC m=+0.528625104 container start 45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:35:23 compute-0 podman[254991]: 2026-01-31 08:35:23.956788368 +0000 UTC m=+0.643070639 container attach 45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_torvalds, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:35:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:24 compute-0 busy_torvalds[255007]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:35:24 compute-0 busy_torvalds[255007]: --> All data devices are unavailable
Jan 31 08:35:24 compute-0 systemd[1]: libpod-45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032.scope: Deactivated successfully.
Jan 31 08:35:24 compute-0 podman[254991]: 2026-01-31 08:35:24.242161622 +0000 UTC m=+0.928443873 container died 45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:35:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a823e406411f749e3799ac0984aa0e291e0365a8f15f95950502f6a0c3df00f-merged.mount: Deactivated successfully.
Jan 31 08:35:25 compute-0 ceph-mon[75294]: pgmap v1298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:25 compute-0 podman[254991]: 2026-01-31 08:35:25.990622323 +0000 UTC m=+2.676904564 container remove 45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:35:26 compute-0 systemd[1]: libpod-conmon-45609851fa1ba501f3bd8b9679f0e661222543627f62503ec16b33ec1c956032.scope: Deactivated successfully.
Jan 31 08:35:26 compute-0 sudo[254913]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:26 compute-0 sudo[255039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:35:26 compute-0 sudo[255039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:26 compute-0 sudo[255039]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:26 compute-0 sudo[255064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:35:26 compute-0 sudo[255064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:26 compute-0 podman[255100]: 2026-01-31 08:35:26.362546674 +0000 UTC m=+0.018522343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:35:26 compute-0 podman[255100]: 2026-01-31 08:35:26.555543331 +0000 UTC m=+0.211518940 container create c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:35:26 compute-0 systemd[1]: Started libpod-conmon-c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43.scope.
Jan 31 08:35:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:35:27 compute-0 podman[255100]: 2026-01-31 08:35:27.36998883 +0000 UTC m=+1.025964479 container init c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cori, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:35:27 compute-0 podman[255100]: 2026-01-31 08:35:27.380665329 +0000 UTC m=+1.036640948 container start c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:35:27 compute-0 cranky_cori[255116]: 167 167
Jan 31 08:35:27 compute-0 systemd[1]: libpod-c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43.scope: Deactivated successfully.
Jan 31 08:35:27 compute-0 ceph-mon[75294]: pgmap v1299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:27 compute-0 podman[255100]: 2026-01-31 08:35:27.520195685 +0000 UTC m=+1.176171324 container attach c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cori, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:35:27 compute-0 podman[255100]: 2026-01-31 08:35:27.520830162 +0000 UTC m=+1.176805801 container died c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 31 08:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa73a24677f68f7556e3ec944b888f46f95114ba967e6c05a2a0f366c97d6247-merged.mount: Deactivated successfully.
Jan 31 08:35:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:28 compute-0 podman[255100]: 2026-01-31 08:35:28.160078807 +0000 UTC m=+1.816054426 container remove c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cori, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:35:28 compute-0 systemd[1]: libpod-conmon-c4e2b9dbc32d208c4c4b39c4c4f92172e8928f0ff4533e2878ad4fff0207ad43.scope: Deactivated successfully.
Jan 31 08:35:28 compute-0 podman[255140]: 2026-01-31 08:35:28.26413138 +0000 UTC m=+0.017052663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:35:28 compute-0 podman[255140]: 2026-01-31 08:35:28.431969085 +0000 UTC m=+0.184890338 container create 320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:35:28 compute-0 systemd[1]: Started libpod-conmon-320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92.scope.
Jan 31 08:35:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/668c172c7383c91e1c03d2b875ff538f54c5c313947a1655bda262605476ce5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/668c172c7383c91e1c03d2b875ff538f54c5c313947a1655bda262605476ce5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/668c172c7383c91e1c03d2b875ff538f54c5c313947a1655bda262605476ce5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/668c172c7383c91e1c03d2b875ff538f54c5c313947a1655bda262605476ce5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:28 compute-0 podman[255140]: 2026-01-31 08:35:28.754493897 +0000 UTC m=+0.507415170 container init 320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:35:28 compute-0 podman[255140]: 2026-01-31 08:35:28.763224714 +0000 UTC m=+0.516145977 container start 320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:35:28 compute-0 podman[255140]: 2026-01-31 08:35:28.876330163 +0000 UTC m=+0.629251516 container attach 320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:35:29 compute-0 agitated_joliot[255157]: {
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:     "0": [
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:         {
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "devices": [
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "/dev/loop3"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             ],
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_name": "ceph_lv0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_size": "21470642176",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "name": "ceph_lv0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "tags": {
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cluster_name": "ceph",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.crush_device_class": "",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.encrypted": "0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.objectstore": "bluestore",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osd_id": "0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.type": "block",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.vdo": "0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.with_tpm": "0"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             },
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "type": "block",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "vg_name": "ceph_vg0"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:         }
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:     ],
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:     "1": [
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:         {
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "devices": [
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "/dev/loop4"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             ],
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_name": "ceph_lv1",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_size": "21470642176",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "name": "ceph_lv1",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "tags": {
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cluster_name": "ceph",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.crush_device_class": "",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.encrypted": "0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.objectstore": "bluestore",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osd_id": "1",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.type": "block",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.vdo": "0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.with_tpm": "0"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             },
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "type": "block",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "vg_name": "ceph_vg1"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:         }
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:     ],
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:     "2": [
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:         {
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "devices": [
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "/dev/loop5"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             ],
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_name": "ceph_lv2",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_size": "21470642176",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "name": "ceph_lv2",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "tags": {
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.cluster_name": "ceph",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.crush_device_class": "",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.encrypted": "0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.objectstore": "bluestore",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osd_id": "2",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.type": "block",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.vdo": "0",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:                 "ceph.with_tpm": "0"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             },
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "type": "block",
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:             "vg_name": "ceph_vg2"
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:         }
Jan 31 08:35:29 compute-0 agitated_joliot[255157]:     ]
Jan 31 08:35:29 compute-0 agitated_joliot[255157]: }
Jan 31 08:35:29 compute-0 systemd[1]: libpod-320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92.scope: Deactivated successfully.
Jan 31 08:35:29 compute-0 podman[255140]: 2026-01-31 08:35:29.053320245 +0000 UTC m=+0.806241598 container died 320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-668c172c7383c91e1c03d2b875ff538f54c5c313947a1655bda262605476ce5f-merged.mount: Deactivated successfully.
Jan 31 08:35:29 compute-0 ceph-mon[75294]: pgmap v1300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:30 compute-0 podman[255140]: 2026-01-31 08:35:30.072574021 +0000 UTC m=+1.825495284 container remove 320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:35:30 compute-0 systemd[1]: libpod-conmon-320f57c39fad6f6bb297d15e01857fb24732c1ba7b4db9e1c4dd421e33159e92.scope: Deactivated successfully.
Jan 31 08:35:30 compute-0 sudo[255064]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:30 compute-0 sudo[255178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:35:30 compute-0 sudo[255178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:30 compute-0 sudo[255178]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:30 compute-0 sudo[255203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:35:30 compute-0 sudo[255203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:30 compute-0 podman[255240]: 2026-01-31 08:35:30.535155732 +0000 UTC m=+0.114816336 container create a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:35:30 compute-0 podman[255240]: 2026-01-31 08:35:30.438736116 +0000 UTC m=+0.018396740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:35:30 compute-0 systemd[1]: Started libpod-conmon-a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e.scope.
Jan 31 08:35:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:35:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:30 compute-0 podman[255240]: 2026-01-31 08:35:30.909377616 +0000 UTC m=+0.489038260 container init a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:35:30 compute-0 podman[255240]: 2026-01-31 08:35:30.914672899 +0000 UTC m=+0.494333503 container start a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:35:30 compute-0 cranky_brattain[255256]: 167 167
Jan 31 08:35:30 compute-0 systemd[1]: libpod-a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e.scope: Deactivated successfully.
Jan 31 08:35:31 compute-0 podman[255240]: 2026-01-31 08:35:31.082068852 +0000 UTC m=+0.661729476 container attach a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:35:31 compute-0 podman[255240]: 2026-01-31 08:35:31.082434991 +0000 UTC m=+0.662095595 container died a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:35:31 compute-0 ceph-mon[75294]: pgmap v1301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f563b7aa9815eba4511fd3c043fb53c89fc7f43539f37c2f20b499b8c788f8b2-merged.mount: Deactivated successfully.
Jan 31 08:35:31 compute-0 podman[255240]: 2026-01-31 08:35:31.814130315 +0000 UTC m=+1.393790919 container remove a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brattain, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:35:31 compute-0 systemd[1]: libpod-conmon-a25db1a62d2af44804d727c4a4ac93a7775fe02c8e64f1f58e905f6b9010ae4e.scope: Deactivated successfully.
Jan 31 08:35:32 compute-0 podman[255280]: 2026-01-31 08:35:31.909870612 +0000 UTC m=+0.018688177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:35:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:32 compute-0 podman[255280]: 2026-01-31 08:35:32.179309393 +0000 UTC m=+0.288126888 container create ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:35:32 compute-0 systemd[1]: Started libpod-conmon-ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a.scope.
Jan 31 08:35:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe347e2b6e86fc8b4e7f524de55b556847b14e5cd62589b52646c55b00d62974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe347e2b6e86fc8b4e7f524de55b556847b14e5cd62589b52646c55b00d62974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe347e2b6e86fc8b4e7f524de55b556847b14e5cd62589b52646c55b00d62974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe347e2b6e86fc8b4e7f524de55b556847b14e5cd62589b52646c55b00d62974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:35:32 compute-0 podman[255280]: 2026-01-31 08:35:32.373970316 +0000 UTC m=+0.482787821 container init ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:35:32 compute-0 podman[255280]: 2026-01-31 08:35:32.378903749 +0000 UTC m=+0.487721244 container start ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:35:32 compute-0 podman[255280]: 2026-01-31 08:35:32.615148109 +0000 UTC m=+0.723965624 container attach ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:35:32 compute-0 lvm[255373]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:35:32 compute-0 lvm[255373]: VG ceph_vg0 finished
Jan 31 08:35:32 compute-0 lvm[255376]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:35:32 compute-0 lvm[255376]: VG ceph_vg1 finished
Jan 31 08:35:33 compute-0 lvm[255378]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:35:33 compute-0 lvm[255378]: VG ceph_vg2 finished
Jan 31 08:35:33 compute-0 jovial_archimedes[255297]: {}
Jan 31 08:35:33 compute-0 systemd[1]: libpod-ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a.scope: Deactivated successfully.
Jan 31 08:35:33 compute-0 podman[255280]: 2026-01-31 08:35:33.142830407 +0000 UTC m=+1.251647902 container died ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:35:33 compute-0 systemd[1]: libpod-ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a.scope: Consumed 1.018s CPU time.
Jan 31 08:35:33 compute-0 ceph-mon[75294]: pgmap v1302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe347e2b6e86fc8b4e7f524de55b556847b14e5cd62589b52646c55b00d62974-merged.mount: Deactivated successfully.
Jan 31 08:35:33 compute-0 podman[255280]: 2026-01-31 08:35:33.913510369 +0000 UTC m=+2.022327914 container remove ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:35:33 compute-0 sudo[255203]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:35:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:35:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:35:34 compute-0 systemd[1]: libpod-conmon-ac99552c8bb38dc28a12d586cb7500c0e21467d23b6e8c1d4e98c895e8c57c7a.scope: Deactivated successfully.
Jan 31 08:35:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:35:34 compute-0 sudo[255393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:35:34 compute-0 sudo[255393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:34 compute-0 sudo[255393]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:35:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:35:35 compute-0 ceph-mon[75294]: pgmap v1303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:37 compute-0 ceph-mon[75294]: pgmap v1304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:35:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702057494' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:35:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:35:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702057494' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:35:39 compute-0 ceph-mon[75294]: pgmap v1305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1702057494' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:35:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1702057494' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:35:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:41 compute-0 ceph-mon[75294]: pgmap v1306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:44 compute-0 ceph-mon[75294]: pgmap v1307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:45 compute-0 ceph-mon[75294]: pgmap v1308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:46 compute-0 podman[255418]: 2026-01-31 08:35:46.176441026 +0000 UTC m=+0.047100478 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 31 08:35:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:35:46.979 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:35:46.980 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:35:46.980 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:47 compute-0 ceph-mon[75294]: pgmap v1309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:47 compute-0 sshd-session[255438]: Invalid user solana from 80.94.92.182 port 35728
Jan 31 08:35:47 compute-0 sshd-session[255438]: Connection closed by invalid user solana 80.94.92.182 port 35728 [preauth]
Jan 31 08:35:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:49 compute-0 ceph-mon[75294]: pgmap v1310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:50 compute-0 podman[255440]: 2026-01-31 08:35:50.224541676 +0000 UTC m=+0.098412932 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:35:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:35:50
Jan 31 08:35:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:35:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:35:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr']
Jan 31 08:35:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:35:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:51 compute-0 ceph-mon[75294]: pgmap v1311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:53 compute-0 ceph-mon[75294]: pgmap v1312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:54 compute-0 ceph-mon[75294]: pgmap v1313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:35:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:35:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:57 compute-0 ceph-mon[75294]: pgmap v1314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:59 compute-0 ceph-mon[75294]: pgmap v1315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:01 compute-0 ceph-mon[75294]: pgmap v1316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:03 compute-0 ceph-mon[75294]: pgmap v1317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:05 compute-0 ceph-mon[75294]: pgmap v1318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:36:07 compute-0 ceph-mon[75294]: pgmap v1319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:08 compute-0 nova_compute[240062]: 2026-01-31 08:36:08.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:08 compute-0 nova_compute[240062]: 2026-01-31 08:36:08.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:36:08 compute-0 nova_compute[240062]: 2026-01-31 08:36:08.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:36:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:09 compute-0 ceph-mon[75294]: pgmap v1320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:11 compute-0 nova_compute[240062]: 2026-01-31 08:36:11.311 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:36:11 compute-0 nova_compute[240062]: 2026-01-31 08:36:11.311 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:11 compute-0 nova_compute[240062]: 2026-01-31 08:36:11.311 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:11 compute-0 nova_compute[240062]: 2026-01-31 08:36:11.312 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:11 compute-0 nova_compute[240062]: 2026-01-31 08:36:11.312 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:36:11 compute-0 nova_compute[240062]: 2026-01-31 08:36:11.312 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:11 compute-0 ceph-mon[75294]: pgmap v1321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:13 compute-0 ceph-mon[75294]: pgmap v1322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:14 compute-0 nova_compute[240062]: 2026-01-31 08:36:14.797 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:14 compute-0 nova_compute[240062]: 2026-01-31 08:36:14.798 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:14 compute-0 nova_compute[240062]: 2026-01-31 08:36:14.798 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:14 compute-0 nova_compute[240062]: 2026-01-31 08:36:14.798 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:36:14 compute-0 nova_compute[240062]: 2026-01-31 08:36:14.798 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:36:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:36:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4027463564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:15 compute-0 nova_compute[240062]: 2026-01-31 08:36:15.305 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:36:15 compute-0 nova_compute[240062]: 2026-01-31 08:36:15.439 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:36:15 compute-0 nova_compute[240062]: 2026-01-31 08:36:15.441 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:36:15 compute-0 nova_compute[240062]: 2026-01-31 08:36:15.441 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:15 compute-0 nova_compute[240062]: 2026-01-31 08:36:15.441 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:15 compute-0 ceph-mon[75294]: pgmap v1323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:15 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4027463564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:17 compute-0 podman[255491]: 2026-01-31 08:36:17.193740241 +0000 UTC m=+0.065811436 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:36:17 compute-0 ceph-mon[75294]: pgmap v1324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.458 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.459 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.541 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing inventories for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.621 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating ProviderTree inventory for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.621 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.638 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing aggregate associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.773 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing trait associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:36:18 compute-0 ceph-mon[75294]: pgmap v1325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:18 compute-0 nova_compute[240062]: 2026-01-31 08:36:18.891 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:36:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:36:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3234930279' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:19 compute-0 nova_compute[240062]: 2026-01-31 08:36:19.394 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:36:19 compute-0 nova_compute[240062]: 2026-01-31 08:36:19.398 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:36:19 compute-0 nova_compute[240062]: 2026-01-31 08:36:19.767 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:36:19 compute-0 nova_compute[240062]: 2026-01-31 08:36:19.769 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:36:19 compute-0 nova_compute[240062]: 2026-01-31 08:36:19.769 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:19 compute-0 nova_compute[240062]: 2026-01-31 08:36:19.770 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:19 compute-0 nova_compute[240062]: 2026-01-31 08:36:19.770 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:36:20 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3234930279' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:20 compute-0 nova_compute[240062]: 2026-01-31 08:36:20.999 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:36:20 compute-0 nova_compute[240062]: 2026-01-31 08:36:20.999 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:21 compute-0 nova_compute[240062]: 2026-01-31 08:36:20.999 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:36:21 compute-0 podman[255532]: 2026-01-31 08:36:21.195499094 +0000 UTC m=+0.063775202 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:36:21 compute-0 ceph-mon[75294]: pgmap v1326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:23 compute-0 ceph-mon[75294]: pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:23 compute-0 nova_compute[240062]: 2026-01-31 08:36:23.986 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:25 compute-0 ceph-mon[75294]: pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:27 compute-0 nova_compute[240062]: 2026-01-31 08:36:27.021 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:27 compute-0 nova_compute[240062]: 2026-01-31 08:36:27.021 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:27 compute-0 ceph-mon[75294]: pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:28 compute-0 nova_compute[240062]: 2026-01-31 08:36:28.046 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:28 compute-0 nova_compute[240062]: 2026-01-31 08:36:28.046 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:28 compute-0 nova_compute[240062]: 2026-01-31 08:36:28.046 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:36:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 5934 writes, 26K keys, 5934 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5934 writes, 5934 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1313 writes, 5963 keys, 1313 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
                                           Interval WAL: 1313 writes, 1313 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.9      1.42              0.06        15    0.095       0      0       0.0       0.0
                                             L6      1/0    7.59 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     46.6     38.2      2.80              0.20        14    0.200     65K   7843       0.0       0.0
                                            Sum      1/0    7.59 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     30.9     32.7      4.21              0.26        29    0.145     65K   7843       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     16.2     16.3      2.47              0.07         8    0.309     21K   2549       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     46.6     38.2      2.80              0.20        14    0.200     65K   7843       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.8      1.36              0.06        14    0.097       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.13 GB read, 0.05 MB/s read, 4.2 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 2.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cc8bf858d0#2 capacity: 304.00 MB usage: 14.04 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000121 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(873,13.51 MB,4.44554%) FilterBlock(30,187.23 KB,0.0601467%) IndexBlock(30,350.78 KB,0.112684%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:36:29 compute-0 ceph-mon[75294]: pgmap v1330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:31 compute-0 ceph-mon[75294]: pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:33 compute-0 ceph-mon[75294]: pgmap v1332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:34 compute-0 sudo[255558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:34 compute-0 sudo[255558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:34 compute-0 sudo[255558]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:34 compute-0 sudo[255583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:36:34 compute-0 sudo[255583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:34 compute-0 sudo[255583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:36:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:36:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:36:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:36:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:36:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:36:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:36:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:36:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:36:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:36:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:34 compute-0 sudo[255638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:34 compute-0 sudo[255638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:34 compute-0 sudo[255638]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:34 compute-0 sudo[255663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:36:34 compute-0 sudo[255663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:35 compute-0 podman[255700]: 2026-01-31 08:36:35.101938886 +0000 UTC m=+0.019850301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:35 compute-0 podman[255700]: 2026-01-31 08:36:35.310085363 +0000 UTC m=+0.227996758 container create ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:35 compute-0 systemd[1]: Started libpod-conmon-ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f.scope.
Jan 31 08:36:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:35 compute-0 ceph-mon[75294]: pgmap v1333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:36:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:36:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:36:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:36:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:35 compute-0 podman[255700]: 2026-01-31 08:36:35.760743731 +0000 UTC m=+0.678655156 container init ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:35 compute-0 podman[255700]: 2026-01-31 08:36:35.765514241 +0000 UTC m=+0.683425636 container start ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:36:35 compute-0 adoring_goldstine[255716]: 167 167
Jan 31 08:36:35 compute-0 systemd[1]: libpod-ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f.scope: Deactivated successfully.
Jan 31 08:36:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:36 compute-0 podman[255700]: 2026-01-31 08:36:36.080711223 +0000 UTC m=+0.998622648 container attach ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:36:36 compute-0 podman[255700]: 2026-01-31 08:36:36.081218887 +0000 UTC m=+0.999130282 container died ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:36:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8f492b6e68bf92ad4c062f4f6423d04a6f90c1abbe76bb7265b1bb3c46af6d7-merged.mount: Deactivated successfully.
Jan 31 08:36:37 compute-0 ceph-mon[75294]: pgmap v1334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:37 compute-0 podman[255700]: 2026-01-31 08:36:37.822627998 +0000 UTC m=+2.740539393 container remove ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:36:37 compute-0 systemd[1]: libpod-conmon-ae3f238d972fab3ed49fbc38e4d46f5695188e655d343ddce5a27cef0c2c944f.scope: Deactivated successfully.
Jan 31 08:36:38 compute-0 podman[255739]: 2026-01-31 08:36:37.919740943 +0000 UTC m=+0.020705253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:38 compute-0 podman[255739]: 2026-01-31 08:36:38.052897115 +0000 UTC m=+0.153861405 container create 20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:38 compute-0 systemd[1]: Started libpod-conmon-20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e.scope.
Jan 31 08:36:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fb8cc55e53d7b090b9df13a5a6cd32894dfbe64781816b57f3620595c0f18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fb8cc55e53d7b090b9df13a5a6cd32894dfbe64781816b57f3620595c0f18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fb8cc55e53d7b090b9df13a5a6cd32894dfbe64781816b57f3620595c0f18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fb8cc55e53d7b090b9df13a5a6cd32894dfbe64781816b57f3620595c0f18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101fb8cc55e53d7b090b9df13a5a6cd32894dfbe64781816b57f3620595c0f18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:38 compute-0 podman[255739]: 2026-01-31 08:36:38.44483121 +0000 UTC m=+0.545795530 container init 20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:36:38 compute-0 podman[255739]: 2026-01-31 08:36:38.451344238 +0000 UTC m=+0.552308528 container start 20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 31 08:36:38 compute-0 podman[255739]: 2026-01-31 08:36:38.651528839 +0000 UTC m=+0.752493159 container attach 20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:36:38 compute-0 loving_booth[255756]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:36:38 compute-0 loving_booth[255756]: --> All data devices are unavailable
Jan 31 08:36:38 compute-0 systemd[1]: libpod-20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e.scope: Deactivated successfully.
Jan 31 08:36:38 compute-0 podman[255739]: 2026-01-31 08:36:38.867996072 +0000 UTC m=+0.968960362 container died 20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-101fb8cc55e53d7b090b9df13a5a6cd32894dfbe64781816b57f3620595c0f18-merged.mount: Deactivated successfully.
Jan 31 08:36:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:36:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989863137' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:36:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:36:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989863137' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:36:39 compute-0 podman[255739]: 2026-01-31 08:36:39.333119653 +0000 UTC m=+1.434083943 container remove 20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:36:39 compute-0 systemd[1]: libpod-conmon-20486bb6d83ce7c299107471f953c5402772c5d3f7981ba582c8f8a28a86f36e.scope: Deactivated successfully.
Jan 31 08:36:39 compute-0 sudo[255663]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:39 compute-0 sudo[255792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:39 compute-0 sudo[255792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:39 compute-0 sudo[255792]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:39 compute-0 sudo[255817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:36:39 compute-0 sudo[255817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:39 compute-0 ceph-mon[75294]: pgmap v1335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/989863137' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:36:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/989863137' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:36:39 compute-0 podman[255853]: 2026-01-31 08:36:39.758894806 +0000 UTC m=+0.018013190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:39 compute-0 podman[255853]: 2026-01-31 08:36:39.869361663 +0000 UTC m=+0.128480027 container create ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:36:39 compute-0 systemd[1]: Started libpod-conmon-ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330.scope.
Jan 31 08:36:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:39 compute-0 podman[255853]: 2026-01-31 08:36:39.998801525 +0000 UTC m=+0.257919919 container init ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:40 compute-0 podman[255853]: 2026-01-31 08:36:40.004862769 +0000 UTC m=+0.263981133 container start ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:36:40 compute-0 adoring_khayyam[255869]: 167 167
Jan 31 08:36:40 compute-0 systemd[1]: libpod-ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330.scope: Deactivated successfully.
Jan 31 08:36:40 compute-0 podman[255853]: 2026-01-31 08:36:40.116278633 +0000 UTC m=+0.375396997 container attach ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:36:40 compute-0 podman[255853]: 2026-01-31 08:36:40.117011852 +0000 UTC m=+0.376130216 container died ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:36:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a0ff7343849646f6c052008648f953bc71c9c93740ef76431cdc7314447d530-merged.mount: Deactivated successfully.
Jan 31 08:36:40 compute-0 podman[255853]: 2026-01-31 08:36:40.894405747 +0000 UTC m=+1.153524111 container remove ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:36:40 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:40 compute-0 systemd[1]: libpod-conmon-ea75589aa36bc980ec81eb64dd4b98979a35399ea56751ac6885b39002f8e330.scope: Deactivated successfully.
Jan 31 08:36:41 compute-0 podman[255893]: 2026-01-31 08:36:40.991692406 +0000 UTC m=+0.021517275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:41 compute-0 ceph-mon[75294]: pgmap v1336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:41 compute-0 podman[255893]: 2026-01-31 08:36:41.938014093 +0000 UTC m=+0.967838942 container create c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:42 compute-0 systemd[1]: Started libpod-conmon-c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90.scope.
Jan 31 08:36:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b99f45b67951c7d30f46d6ae2f48811ee1c439f8031d28bdea5a420be2e90fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b99f45b67951c7d30f46d6ae2f48811ee1c439f8031d28bdea5a420be2e90fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b99f45b67951c7d30f46d6ae2f48811ee1c439f8031d28bdea5a420be2e90fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b99f45b67951c7d30f46d6ae2f48811ee1c439f8031d28bdea5a420be2e90fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:42 compute-0 podman[255893]: 2026-01-31 08:36:42.27807083 +0000 UTC m=+1.307895729 container init c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:36:42 compute-0 podman[255893]: 2026-01-31 08:36:42.285557913 +0000 UTC m=+1.315382772 container start c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:42 compute-0 podman[255893]: 2026-01-31 08:36:42.341485781 +0000 UTC m=+1.371310630 container attach c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:36:42 compute-0 blissful_rubin[255909]: {
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:     "0": [
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:         {
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "devices": [
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "/dev/loop3"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             ],
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_name": "ceph_lv0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_size": "21470642176",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "name": "ceph_lv0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "tags": {
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cluster_name": "ceph",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.crush_device_class": "",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.encrypted": "0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.objectstore": "bluestore",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osd_id": "0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.type": "block",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.vdo": "0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.with_tpm": "0"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             },
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "type": "block",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "vg_name": "ceph_vg0"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:         }
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:     ],
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:     "1": [
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:         {
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "devices": [
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "/dev/loop4"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             ],
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_name": "ceph_lv1",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_size": "21470642176",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "name": "ceph_lv1",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "tags": {
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cluster_name": "ceph",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.crush_device_class": "",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.encrypted": "0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.objectstore": "bluestore",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osd_id": "1",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.type": "block",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.vdo": "0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.with_tpm": "0"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             },
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "type": "block",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "vg_name": "ceph_vg1"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:         }
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:     ],
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:     "2": [
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:         {
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "devices": [
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "/dev/loop5"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             ],
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_name": "ceph_lv2",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_size": "21470642176",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "name": "ceph_lv2",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "tags": {
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.cluster_name": "ceph",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.crush_device_class": "",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.encrypted": "0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.objectstore": "bluestore",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osd_id": "2",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.type": "block",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.vdo": "0",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:                 "ceph.with_tpm": "0"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             },
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "type": "block",
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:             "vg_name": "ceph_vg2"
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:         }
Jan 31 08:36:42 compute-0 blissful_rubin[255909]:     ]
Jan 31 08:36:42 compute-0 blissful_rubin[255909]: }
Jan 31 08:36:42 compute-0 systemd[1]: libpod-c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90.scope: Deactivated successfully.
Jan 31 08:36:42 compute-0 podman[255893]: 2026-01-31 08:36:42.570424163 +0000 UTC m=+1.600249012 container died c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b99f45b67951c7d30f46d6ae2f48811ee1c439f8031d28bdea5a420be2e90fe-merged.mount: Deactivated successfully.
Jan 31 08:36:43 compute-0 podman[255893]: 2026-01-31 08:36:43.117529388 +0000 UTC m=+2.147354237 container remove c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:43 compute-0 systemd[1]: libpod-conmon-c11c9bd9729c32c85403bfa087cee021c13c567145fd2e36ea300b49fc20bc90.scope: Deactivated successfully.
Jan 31 08:36:43 compute-0 sudo[255817]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:43 compute-0 sudo[255930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:43 compute-0 sudo[255930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:43 compute-0 sudo[255930]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:43 compute-0 sudo[255955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:36:43 compute-0 sudo[255955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:43 compute-0 ceph-mon[75294]: pgmap v1337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:43 compute-0 podman[255990]: 2026-01-31 08:36:43.525515208 +0000 UTC m=+0.023405436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:43 compute-0 podman[255990]: 2026-01-31 08:36:43.63652525 +0000 UTC m=+0.134415458 container create 6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:43 compute-0 systemd[1]: Started libpod-conmon-6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc.scope.
Jan 31 08:36:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:44 compute-0 podman[255990]: 2026-01-31 08:36:44.055327483 +0000 UTC m=+0.553217721 container init 6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:36:44 compute-0 podman[255990]: 2026-01-31 08:36:44.060448253 +0000 UTC m=+0.558338461 container start 6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:36:44 compute-0 busy_cohen[256006]: 167 167
Jan 31 08:36:44 compute-0 systemd[1]: libpod-6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc.scope: Deactivated successfully.
Jan 31 08:36:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:44 compute-0 podman[255990]: 2026-01-31 08:36:44.186918034 +0000 UTC m=+0.684808352 container attach 6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:44 compute-0 podman[255990]: 2026-01-31 08:36:44.187568652 +0000 UTC m=+0.685458870 container died 6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-008e512b6eabf65d48f8025b4a517d719d2f4487517999c2c539afb1c39bbae9-merged.mount: Deactivated successfully.
Jan 31 08:36:45 compute-0 ceph-mon[75294]: pgmap v1338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:45 compute-0 podman[255990]: 2026-01-31 08:36:45.385935988 +0000 UTC m=+1.883826196 container remove 6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_cohen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:36:45 compute-0 systemd[1]: libpod-conmon-6951f166467a34554346cf36b4a980d6ce4db060e420046fbe3dede85820b7dc.scope: Deactivated successfully.
Jan 31 08:36:45 compute-0 podman[256029]: 2026-01-31 08:36:45.571396219 +0000 UTC m=+0.094301749 container create fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cray, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:45 compute-0 podman[256029]: 2026-01-31 08:36:45.499381196 +0000 UTC m=+0.022286756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:45 compute-0 systemd[1]: Started libpod-conmon-fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721.scope.
Jan 31 08:36:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e834a602d36a188a98bbe1ce820ca30cf96c7d771d920ca6e6e0b0d824e5557/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e834a602d36a188a98bbe1ce820ca30cf96c7d771d920ca6e6e0b0d824e5557/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e834a602d36a188a98bbe1ce820ca30cf96c7d771d920ca6e6e0b0d824e5557/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e834a602d36a188a98bbe1ce820ca30cf96c7d771d920ca6e6e0b0d824e5557/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:46 compute-0 podman[256029]: 2026-01-31 08:36:46.036393067 +0000 UTC m=+0.559298617 container init fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:36:46 compute-0 podman[256029]: 2026-01-31 08:36:46.041474425 +0000 UTC m=+0.564379955 container start fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cray, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:46 compute-0 podman[256029]: 2026-01-31 08:36:46.143494553 +0000 UTC m=+0.666400083 container attach fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:46 compute-0 lvm[256125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:36:46 compute-0 lvm[256126]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:36:46 compute-0 lvm[256126]: VG ceph_vg1 finished
Jan 31 08:36:46 compute-0 lvm[256125]: VG ceph_vg0 finished
Jan 31 08:36:46 compute-0 lvm[256128]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:36:46 compute-0 lvm[256128]: VG ceph_vg2 finished
Jan 31 08:36:46 compute-0 nifty_cray[256045]: {}
Jan 31 08:36:46 compute-0 systemd[1]: libpod-fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721.scope: Deactivated successfully.
Jan 31 08:36:46 compute-0 podman[256029]: 2026-01-31 08:36:46.735103225 +0000 UTC m=+1.258008775 container died fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:36:46.981 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:36:46.982 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:36:46.982 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e834a602d36a188a98bbe1ce820ca30cf96c7d771d920ca6e6e0b0d824e5557-merged.mount: Deactivated successfully.
Jan 31 08:36:47 compute-0 podman[256029]: 2026-01-31 08:36:47.577876243 +0000 UTC m=+2.100781813 container remove fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:47 compute-0 systemd[1]: libpod-conmon-fc2933d2f5e118359e998b7154e6880f296709cf84d55a1190aa36bb9b3bf721.scope: Deactivated successfully.
Jan 31 08:36:47 compute-0 sudo[255955]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:36:47 compute-0 ceph-mon[75294]: pgmap v1339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:47 compute-0 podman[256142]: 2026-01-31 08:36:47.661290736 +0000 UTC m=+0.229305483 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:36:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:36:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:36:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:36:48 compute-0 sudo[256162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:36:48 compute-0 sudo[256162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:48 compute-0 sudo[256162]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:49 compute-0 ceph-mon[75294]: pgmap v1340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:49 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:36:49 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:36:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:36:50
Jan 31 08:36:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:36:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:36:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log']
Jan 31 08:36:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:36:50 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:51 compute-0 ceph-mon[75294]: pgmap v1341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:52 compute-0 podman[256187]: 2026-01-31 08:36:52.193761697 +0000 UTC m=+0.066580818 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 31 08:36:53 compute-0 ceph-mon[75294]: pgmap v1342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:36:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:36:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:55 compute-0 ceph-mon[75294]: pgmap v1343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:57 compute-0 ceph-mon[75294]: pgmap v1344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:59 compute-0 ceph-mon[75294]: pgmap v1345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:01 compute-0 ceph-mon[75294]: pgmap v1346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:03 compute-0 ceph-mon[75294]: pgmap v1347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:05 compute-0 ceph-mon[75294]: pgmap v1348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:06 compute-0 nova_compute[240062]: 2026-01-31 08:37:06.426 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:37:07 compute-0 ceph-mon[75294]: pgmap v1349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:08 compute-0 ceph-mon[75294]: pgmap v1350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:09 compute-0 nova_compute[240062]: 2026-01-31 08:37:09.245 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:10 compute-0 nova_compute[240062]: 2026-01-31 08:37:10.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:10 compute-0 nova_compute[240062]: 2026-01-31 08:37:10.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:37:10 compute-0 nova_compute[240062]: 2026-01-31 08:37:10.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:37:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:10 compute-0 nova_compute[240062]: 2026-01-31 08:37:10.191 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:37:10 compute-0 nova_compute[240062]: 2026-01-31 08:37:10.191 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:10 compute-0 nova_compute[240062]: 2026-01-31 08:37:10.191 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:10 compute-0 nova_compute[240062]: 2026-01-31 08:37:10.191 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:37:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:11 compute-0 nova_compute[240062]: 2026-01-31 08:37:11.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:11 compute-0 nova_compute[240062]: 2026-01-31 08:37:11.187 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:11 compute-0 nova_compute[240062]: 2026-01-31 08:37:11.187 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:11 compute-0 nova_compute[240062]: 2026-01-31 08:37:11.187 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:11 compute-0 nova_compute[240062]: 2026-01-31 08:37:11.188 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:37:11 compute-0 nova_compute[240062]: 2026-01-31 08:37:11.188 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:37:11 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420206447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.016 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.828s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:12 compute-0 ceph-mon[75294]: pgmap v1351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.141 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.142 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.143 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.143 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.565 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.566 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:37:12 compute-0 nova_compute[240062]: 2026-01-31 08:37:12.584 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:37:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835213049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:13 compute-0 nova_compute[240062]: 2026-01-31 08:37:13.079 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:13 compute-0 nova_compute[240062]: 2026-01-31 08:37:13.083 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:37:13 compute-0 nova_compute[240062]: 2026-01-31 08:37:13.108 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:37:13 compute-0 nova_compute[240062]: 2026-01-31 08:37:13.109 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:37:13 compute-0 nova_compute[240062]: 2026-01-31 08:37:13.109 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:13 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1420206447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:13 compute-0 ceph-mon[75294]: pgmap v1352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:13 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2835213049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:15 compute-0 ceph-mon[75294]: pgmap v1353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:16 compute-0 nova_compute[240062]: 2026-01-31 08:37:16.104 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:16 compute-0 nova_compute[240062]: 2026-01-31 08:37:16.105 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:16 compute-0 nova_compute[240062]: 2026-01-31 08:37:16.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:17 compute-0 nova_compute[240062]: 2026-01-31 08:37:17.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:17 compute-0 ceph-mon[75294]: pgmap v1354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:18 compute-0 podman[256257]: 2026-01-31 08:37:18.164213703 +0000 UTC m=+0.041216639 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 08:37:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:19 compute-0 ceph-mon[75294]: pgmap v1355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:21 compute-0 ceph-mon[75294]: pgmap v1356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:23 compute-0 podman[256278]: 2026-01-31 08:37:23.199379204 +0000 UTC m=+0.072482427 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:37:23 compute-0 ceph-mon[75294]: pgmap v1357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:25 compute-0 ceph-mon[75294]: pgmap v1358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:25 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:27 compute-0 ceph-mon[75294]: pgmap v1359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:29 compute-0 ceph-mon[75294]: pgmap v1360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:31 compute-0 ceph-mon[75294]: pgmap v1361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:33 compute-0 ceph-mon[75294]: pgmap v1362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:35 compute-0 ceph-mon[75294]: pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:35 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:37 compute-0 ceph-mon[75294]: pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:37:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3672797249' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:37:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:37:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3672797249' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:37:39 compute-0 ceph-mon[75294]: pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3672797249' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:37:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3672797249' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:37:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:41 compute-0 ceph-mon[75294]: pgmap v1366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:43 compute-0 ceph-mon[75294]: pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:45 compute-0 ceph-mon[75294]: pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:37:46.982 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:37:46.982 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:37:46.982 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:47 compute-0 ceph-mon[75294]: pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:48 compute-0 sudo[256304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:48 compute-0 sudo[256304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:48 compute-0 sudo[256304]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:48 compute-0 sudo[256335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:37:48 compute-0 sudo[256335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:48 compute-0 podman[256328]: 2026-01-31 08:37:48.541506183 +0000 UTC m=+0.065347544 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 08:37:48 compute-0 sudo[256335]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:37:48 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:37:48 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:37:48 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:37:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:37:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:37:49 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:37:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:37:49 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:37:49 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:37:49 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:49 compute-0 ceph-mon[75294]: pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:49 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:49 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:37:49 compute-0 sudo[256403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:49 compute-0 sudo[256403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:49 compute-0 sudo[256403]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:49 compute-0 sudo[256428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:37:49 compute-0 sudo[256428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:49 compute-0 podman[256464]: 2026-01-31 08:37:49.860378759 +0000 UTC m=+0.021225667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:50 compute-0 podman[256464]: 2026-01-31 08:37:50.189735645 +0000 UTC m=+0.350582533 container create e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:37:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:50 compute-0 systemd[1]: Started libpod-conmon-e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2.scope.
Jan 31 08:37:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:50 compute-0 podman[256464]: 2026-01-31 08:37:50.799919882 +0000 UTC m=+0.960766800 container init e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_euler, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:37:50 compute-0 podman[256464]: 2026-01-31 08:37:50.807280911 +0000 UTC m=+0.968127789 container start e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_euler, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:37:50 compute-0 strange_euler[256480]: 167 167
Jan 31 08:37:50 compute-0 systemd[1]: libpod-e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2.scope: Deactivated successfully.
Jan 31 08:37:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:37:50
Jan 31 08:37:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:37:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:37:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.mgr', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'vms', 'default.rgw.log', 'backups']
Jan 31 08:37:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:37:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:51 compute-0 podman[256464]: 2026-01-31 08:37:51.196842932 +0000 UTC m=+1.357689820 container attach e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:51 compute-0 podman[256464]: 2026-01-31 08:37:51.197603382 +0000 UTC m=+1.358450290 container died e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:37:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:37:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:37:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:37:51 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-30cb75701e59e1d820db38e3b88dc911534491c0a98f877096041ed714e1ac1c-merged.mount: Deactivated successfully.
Jan 31 08:37:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:52 compute-0 podman[256464]: 2026-01-31 08:37:52.398197668 +0000 UTC m=+2.559044546 container remove e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:37:52 compute-0 systemd[1]: libpod-conmon-e262e0f1f8e2c9fc69569f3e6429e9bfe468cf490554614f9a26d483edb179a2.scope: Deactivated successfully.
Jan 31 08:37:52 compute-0 podman[256504]: 2026-01-31 08:37:52.494769789 +0000 UTC m=+0.022149092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:52 compute-0 podman[256504]: 2026-01-31 08:37:52.692790361 +0000 UTC m=+0.220169634 container create fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaplygin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:37:53 compute-0 ceph-mon[75294]: pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:53 compute-0 systemd[1]: Started libpod-conmon-fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a.scope.
Jan 31 08:37:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edeeca3bb86f72d17c51b8bb66c5478c50e34c72003418c7f817c143765874ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edeeca3bb86f72d17c51b8bb66c5478c50e34c72003418c7f817c143765874ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edeeca3bb86f72d17c51b8bb66c5478c50e34c72003418c7f817c143765874ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edeeca3bb86f72d17c51b8bb66c5478c50e34c72003418c7f817c143765874ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edeeca3bb86f72d17c51b8bb66c5478c50e34c72003418c7f817c143765874ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:53 compute-0 podman[256504]: 2026-01-31 08:37:53.326767414 +0000 UTC m=+0.854146697 container init fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaplygin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:37:53 compute-0 podman[256504]: 2026-01-31 08:37:53.33290581 +0000 UTC m=+0.860285083 container start fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:53 compute-0 podman[256504]: 2026-01-31 08:37:53.67786234 +0000 UTC m=+1.205241633 container attach fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:37:53 compute-0 adoring_chaplygin[256520]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:37:53 compute-0 adoring_chaplygin[256520]: --> All data devices are unavailable
Jan 31 08:37:53 compute-0 systemd[1]: libpod-fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a.scope: Deactivated successfully.
Jan 31 08:37:53 compute-0 podman[256504]: 2026-01-31 08:37:53.712710005 +0000 UTC m=+1.240089328 container died fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaplygin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:37:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:54 compute-0 ceph-mon[75294]: pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-edeeca3bb86f72d17c51b8bb66c5478c50e34c72003418c7f817c143765874ae-merged.mount: Deactivated successfully.
Jan 31 08:37:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:55 compute-0 podman[256504]: 2026-01-31 08:37:55.718774147 +0000 UTC m=+3.246153420 container remove fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:37:55 compute-0 sudo[256428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:37:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:37:55 compute-0 systemd[1]: libpod-conmon-fa16e96f4a9efac5a745f3a0c03d6783e658e92e0205b5715075f5594b9c8e6a.scope: Deactivated successfully.
Jan 31 08:37:55 compute-0 sudo[256564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:55 compute-0 ceph-mon[75294]: pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:55 compute-0 sudo[256564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:55 compute-0 sudo[256564]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:55 compute-0 podman[256541]: 2026-01-31 08:37:55.839887234 +0000 UTC m=+2.097008001 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:37:55 compute-0 sudo[256598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:37:55 compute-0 sudo[256598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:56 compute-0 podman[256641]: 2026-01-31 08:37:56.092620521 +0000 UTC m=+0.017159056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:56 compute-0 podman[256641]: 2026-01-31 08:37:56.365195838 +0000 UTC m=+0.289734353 container create 252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:37:56 compute-0 systemd[1]: Started libpod-conmon-252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e.scope.
Jan 31 08:37:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:56 compute-0 podman[256641]: 2026-01-31 08:37:56.904493331 +0000 UTC m=+0.829031866 container init 252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_rubin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:37:56 compute-0 podman[256641]: 2026-01-31 08:37:56.910524254 +0000 UTC m=+0.835062769 container start 252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:37:56 compute-0 great_rubin[256658]: 167 167
Jan 31 08:37:56 compute-0 systemd[1]: libpod-252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e.scope: Deactivated successfully.
Jan 31 08:37:57 compute-0 podman[256641]: 2026-01-31 08:37:57.324630741 +0000 UTC m=+1.249169276 container attach 252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:37:57 compute-0 podman[256641]: 2026-01-31 08:37:57.325057392 +0000 UTC m=+1.249595907 container died 252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_rubin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:37:57 compute-0 ceph-mon[75294]: pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-faf598560b9ac8efd11c5bfeb2cafb74c1e5df2a8c4b37524edb4300f2a5709a-merged.mount: Deactivated successfully.
Jan 31 08:37:59 compute-0 ceph-mon[75294]: pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:59 compute-0 podman[256641]: 2026-01-31 08:37:59.997606768 +0000 UTC m=+3.922145283 container remove 252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:38:00 compute-0 systemd[1]: libpod-conmon-252222e8b397dd61d5148d4c7050bf67ba987b6c3798b6e4af49e07e8020aa7e.scope: Deactivated successfully.
Jan 31 08:38:00 compute-0 podman[256683]: 2026-01-31 08:38:00.105984089 +0000 UTC m=+0.022239404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:00 compute-0 podman[256683]: 2026-01-31 08:38:00.223360523 +0000 UTC m=+0.139615808 container create 90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 31 08:38:00 compute-0 systemd[1]: Started libpod-conmon-90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f.scope.
Jan 31 08:38:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa6adc774f6dbc5685ea3802554825e14ea7398d55885b0d8a2c59e5c47aa56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa6adc774f6dbc5685ea3802554825e14ea7398d55885b0d8a2c59e5c47aa56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa6adc774f6dbc5685ea3802554825e14ea7398d55885b0d8a2c59e5c47aa56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa6adc774f6dbc5685ea3802554825e14ea7398d55885b0d8a2c59e5c47aa56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:00 compute-0 podman[256683]: 2026-01-31 08:38:00.738020288 +0000 UTC m=+0.654275603 container init 90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:38:00 compute-0 podman[256683]: 2026-01-31 08:38:00.743052665 +0000 UTC m=+0.659307950 container start 90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:38:00 compute-0 exciting_hellman[256700]: {
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:     "0": [
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:         {
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "devices": [
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "/dev/loop3"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             ],
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_name": "ceph_lv0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_size": "21470642176",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "name": "ceph_lv0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "tags": {
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cluster_name": "ceph",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.crush_device_class": "",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.encrypted": "0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.objectstore": "bluestore",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osd_id": "0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.type": "block",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.vdo": "0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.with_tpm": "0"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             },
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "type": "block",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "vg_name": "ceph_vg0"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:         }
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:     ],
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:     "1": [
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:         {
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "devices": [
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "/dev/loop4"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             ],
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_name": "ceph_lv1",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_size": "21470642176",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "name": "ceph_lv1",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "tags": {
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cluster_name": "ceph",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.crush_device_class": "",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.encrypted": "0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.objectstore": "bluestore",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osd_id": "1",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.type": "block",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.vdo": "0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.with_tpm": "0"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             },
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "type": "block",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "vg_name": "ceph_vg1"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:         }
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:     ],
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:     "2": [
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:         {
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "devices": [
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "/dev/loop5"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             ],
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_name": "ceph_lv2",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_size": "21470642176",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "name": "ceph_lv2",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "tags": {
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.cluster_name": "ceph",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.crush_device_class": "",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.encrypted": "0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.objectstore": "bluestore",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osd_id": "2",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.type": "block",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.vdo": "0",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:                 "ceph.with_tpm": "0"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             },
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "type": "block",
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:             "vg_name": "ceph_vg2"
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:         }
Jan 31 08:38:00 compute-0 exciting_hellman[256700]:     ]
Jan 31 08:38:00 compute-0 exciting_hellman[256700]: }
Jan 31 08:38:01 compute-0 systemd[1]: libpod-90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f.scope: Deactivated successfully.
Jan 31 08:38:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:01 compute-0 podman[256683]: 2026-01-31 08:38:01.26219241 +0000 UTC m=+1.178447705 container attach 90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:38:01 compute-0 podman[256683]: 2026-01-31 08:38:01.263020753 +0000 UTC m=+1.179276048 container died 90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:38:01 compute-0 ceph-mon[75294]: pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aa6adc774f6dbc5685ea3802554825e14ea7398d55885b0d8a2c59e5c47aa56-merged.mount: Deactivated successfully.
Jan 31 08:38:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:02 compute-0 podman[256683]: 2026-01-31 08:38:02.88752007 +0000 UTC m=+2.803775365 container remove 90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:38:02 compute-0 sudo[256598]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:02 compute-0 systemd[1]: libpod-conmon-90375f6a6cc8fd9a0df790c7dbe10a479f671e19966bd946f31ed15cf849bc4f.scope: Deactivated successfully.
Jan 31 08:38:02 compute-0 sudo[256722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:02 compute-0 sudo[256722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:02 compute-0 sudo[256722]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:03 compute-0 sudo[256747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:38:03 compute-0 sudo[256747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:03 compute-0 ceph-mon[75294]: pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:03 compute-0 podman[256784]: 2026-01-31 08:38:03.287227736 +0000 UTC m=+0.019124530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:03 compute-0 podman[256784]: 2026-01-31 08:38:03.61945742 +0000 UTC m=+0.351354194 container create 8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_saha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:38:03 compute-0 systemd[1]: Started libpod-conmon-8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0.scope.
Jan 31 08:38:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:04 compute-0 podman[256784]: 2026-01-31 08:38:04.214269268 +0000 UTC m=+0.946166072 container init 8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:38:04 compute-0 podman[256784]: 2026-01-31 08:38:04.220723237 +0000 UTC m=+0.952620011 container start 8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_saha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:38:04 compute-0 funny_saha[256800]: 167 167
Jan 31 08:38:04 compute-0 systemd[1]: libpod-8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0.scope: Deactivated successfully.
Jan 31 08:38:04 compute-0 podman[256784]: 2026-01-31 08:38:04.421570954 +0000 UTC m=+1.153467728 container attach 8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:38:04 compute-0 podman[256784]: 2026-01-31 08:38:04.422150998 +0000 UTC m=+1.154047782 container died 8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9a9c7f94d04a064322d9c864d122136990000ec06913a05ec7208264fcbc42-merged.mount: Deactivated successfully.
Jan 31 08:38:05 compute-0 ceph-mon[75294]: pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:05 compute-0 podman[256784]: 2026-01-31 08:38:05.760837192 +0000 UTC m=+2.492733966 container remove 8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_saha, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:38:05 compute-0 systemd[1]: libpod-conmon-8cc44e321b7688e5a931476db066097e4be73c1a539b0c29c56e4e925eb905e0.scope: Deactivated successfully.
Jan 31 08:38:05 compute-0 podman[256824]: 2026-01-31 08:38:05.86325998 +0000 UTC m=+0.020392576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:06 compute-0 podman[256824]: 2026-01-31 08:38:06.419439413 +0000 UTC m=+0.576571989 container create e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_greider, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:38:06 compute-0 systemd[1]: Started libpod-conmon-e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa.scope.
Jan 31 08:38:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c736c27556d4a99e05129b13f6af69e35017ec6df37e8f46bf29305c7cb98621/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c736c27556d4a99e05129b13f6af69e35017ec6df37e8f46bf29305c7cb98621/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c736c27556d4a99e05129b13f6af69e35017ec6df37e8f46bf29305c7cb98621/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c736c27556d4a99e05129b13f6af69e35017ec6df37e8f46bf29305c7cb98621/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:07 compute-0 podman[256824]: 2026-01-31 08:38:07.127982751 +0000 UTC m=+1.285115347 container init e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_greider, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:38:07 compute-0 podman[256824]: 2026-01-31 08:38:07.133968059 +0000 UTC m=+1.291100635 container start e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:38:07 compute-0 ceph-mon[75294]: pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:07 compute-0 podman[256824]: 2026-01-31 08:38:07.313550519 +0000 UTC m=+1.470683125 container attach e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_greider, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:38:07 compute-0 lvm[256919]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:38:07 compute-0 lvm[256916]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:38:07 compute-0 lvm[256916]: VG ceph_vg0 finished
Jan 31 08:38:07 compute-0 lvm[256919]: VG ceph_vg1 finished
Jan 31 08:38:07 compute-0 lvm[256921]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:38:07 compute-0 lvm[256921]: VG ceph_vg2 finished
Jan 31 08:38:07 compute-0 kind_greider[256840]: {}
Jan 31 08:38:07 compute-0 systemd[1]: libpod-e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa.scope: Deactivated successfully.
Jan 31 08:38:07 compute-0 systemd[1]: libpod-e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa.scope: Consumed 1.013s CPU time.
Jan 31 08:38:07 compute-0 podman[256924]: 2026-01-31 08:38:07.881094404 +0000 UTC m=+0.025772090 container died e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_greider, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:38:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c736c27556d4a99e05129b13f6af69e35017ec6df37e8f46bf29305c7cb98621-merged.mount: Deactivated successfully.
Jan 31 08:38:09 compute-0 podman[256924]: 2026-01-31 08:38:09.408535575 +0000 UTC m=+1.553213241 container remove e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:09 compute-0 systemd[1]: libpod-conmon-e907db9324643fecf78564ab63640e4efb15b8ba106f50d2c025873cefd46afa.scope: Deactivated successfully.
Jan 31 08:38:09 compute-0 sudo[256747]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:38:09 compute-0 ceph-mon[75294]: pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:38:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:38:10 compute-0 nova_compute[240062]: 2026-01-31 08:38:10.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:38:10 compute-0 sudo[256939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:38:10 compute-0 sudo[256939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:10 compute-0 sudo[256939]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:11 compute-0 nova_compute[240062]: 2026-01-31 08:38:11.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:11 compute-0 nova_compute[240062]: 2026-01-31 08:38:11.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:38:11 compute-0 nova_compute[240062]: 2026-01-31 08:38:11.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:38:11 compute-0 nova_compute[240062]: 2026-01-31 08:38:11.196 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:38:11 compute-0 nova_compute[240062]: 2026-01-31 08:38:11.196 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:38:11 compute-0 ceph-mon[75294]: pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.154 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.277 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.278 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.278 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.278 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.278 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:38:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607376020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.861 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.985 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.986 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5111MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.987 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:12 compute-0 nova_compute[240062]: 2026-01-31 08:38:12.987 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:13 compute-0 nova_compute[240062]: 2026-01-31 08:38:13.234 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:38:13 compute-0 nova_compute[240062]: 2026-01-31 08:38:13.234 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:38:13 compute-0 nova_compute[240062]: 2026-01-31 08:38:13.248 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:38:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553624077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:13 compute-0 ceph-mon[75294]: pgmap v1382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:13 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3607376020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:13 compute-0 nova_compute[240062]: 2026-01-31 08:38:13.784 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:13 compute-0 nova_compute[240062]: 2026-01-31 08:38:13.789 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.872413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848693872451, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3559218, "memory_usage": 3617600, "flush_reason": "Manual Compaction"}
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848693976937, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3492568, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25797, "largest_seqno": 27848, "table_properties": {"data_size": 3483066, "index_size": 6060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18610, "raw_average_key_size": 20, "raw_value_size": 3464397, "raw_average_value_size": 3737, "num_data_blocks": 269, "num_entries": 927, "num_filter_entries": 927, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848458, "oldest_key_time": 1769848458, "file_creation_time": 1769848693, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 104586 microseconds, and 5354 cpu microseconds.
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.976995) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3492568 bytes OK
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.977016) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.997968) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.998019) EVENT_LOG_v1 {"time_micros": 1769848693998009, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.998045) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3550627, prev total WAL file size 3550627, number of live WAL files 2.
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.998957) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3410KB)], [59(7771KB)]
Jan 31 08:38:13 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848693999010, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11450887, "oldest_snapshot_seqno": -1}
Jan 31 08:38:14 compute-0 nova_compute[240062]: 2026-01-31 08:38:14.010 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:38:14 compute-0 nova_compute[240062]: 2026-01-31 08:38:14.012 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:38:14 compute-0 nova_compute[240062]: 2026-01-31 08:38:14.013 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5183 keys, 9672678 bytes, temperature: kUnknown
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848694130510, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9672678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9635938, "index_size": 22678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 128583, "raw_average_key_size": 24, "raw_value_size": 9540229, "raw_average_value_size": 1840, "num_data_blocks": 937, "num_entries": 5183, "num_filter_entries": 5183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769848693, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.131031) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9672678 bytes
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.158499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.9 rd, 73.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5697, records dropped: 514 output_compression: NoCompression
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.158541) EVENT_LOG_v1 {"time_micros": 1769848694158525, "job": 32, "event": "compaction_finished", "compaction_time_micros": 131759, "compaction_time_cpu_micros": 17775, "output_level": 6, "num_output_files": 1, "total_output_size": 9672678, "num_input_records": 5697, "num_output_records": 5183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848694159093, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848694160009, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:13.998856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.160117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.160124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.160127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.160129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:38:14 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:38:14.160131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:38:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:14 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2553624077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:15 compute-0 ceph-mon[75294]: pgmap v1383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:15 compute-0 sshd-session[257008]: Invalid user sol from 193.32.162.145 port 38132
Jan 31 08:38:15 compute-0 sshd-session[257008]: Connection closed by invalid user sol 193.32.162.145 port 38132 [preauth]
Jan 31 08:38:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:16 compute-0 ceph-mon[75294]: pgmap v1384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:17 compute-0 nova_compute[240062]: 2026-01-31 08:38:17.007 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:17 compute-0 nova_compute[240062]: 2026-01-31 08:38:17.008 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:18 compute-0 nova_compute[240062]: 2026-01-31 08:38:18.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:19 compute-0 nova_compute[240062]: 2026-01-31 08:38:19.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:19 compute-0 podman[257010]: 2026-01-31 08:38:19.174613873 +0000 UTC m=+0.046241297 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:38:19 compute-0 ceph-mon[75294]: pgmap v1385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:21 compute-0 ceph-mon[75294]: pgmap v1386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:38:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6549 writes, 26K keys, 6549 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6549 writes, 1308 syncs, 5.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 322 writes, 617 keys, 322 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 322 writes, 146 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:38:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:23 compute-0 ceph-mon[75294]: pgmap v1387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:25 compute-0 ceph-mon[75294]: pgmap v1388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:38:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 7919 writes, 31K keys, 7919 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7919 writes, 1754 syncs, 4.51 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 369 writes, 967 keys, 369 commit groups, 1.0 writes per commit group, ingest: 0.43 MB, 0.00 MB/s
                                           Interval WAL: 369 writes, 163 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:38:26 compute-0 nova_compute[240062]: 2026-01-31 08:38:26.149 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:26 compute-0 podman[257029]: 2026-01-31 08:38:26.201333877 +0000 UTC m=+0.063917694 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 08:38:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:27 compute-0 ceph-mon[75294]: pgmap v1389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:27 compute-0 sshd-session[257055]: Accepted publickey for zuul from 192.168.122.30 port 54182 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:38:27 compute-0 systemd-logind[810]: New session 52 of user zuul.
Jan 31 08:38:27 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 31 08:38:27 compute-0 sshd-session[257055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:38:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:29 compute-0 ceph-mon[75294]: pgmap v1390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:30 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:38:30.111 155810 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:b9:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:58:2f:a4:b2:e1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:38:30 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:38:30.111 155810 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:38:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:38:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 6771 writes, 26K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6771 writes, 1347 syncs, 5.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 498 writes, 1214 keys, 498 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 498 writes, 227 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:38:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:31 compute-0 ceph-mon[75294]: pgmap v1391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:33 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:38:33.113 155810 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41f56c18-6e96-48c3-b4a0-6aca47eb55b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:33 compute-0 sshd-session[257058]: Connection closed by 192.168.122.30 port 54182
Jan 31 08:38:33 compute-0 sshd-session[257055]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:38:33 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 31 08:38:33 compute-0 systemd-logind[810]: Session 52 logged out. Waiting for processes to exit.
Jan 31 08:38:33 compute-0 systemd-logind[810]: Removed session 52.
Jan 31 08:38:33 compute-0 ceph-mon[75294]: pgmap v1392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:35 compute-0 ceph-mon[75294]: pgmap v1393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:37 compute-0 ceph-mon[75294]: pgmap v1394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:37 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Check health
Jan 31 08:38:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:38:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2263258745' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:38:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:38:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2263258745' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:38:39 compute-0 ceph-mon[75294]: pgmap v1395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2263258745' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:38:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2263258745' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:38:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:41 compute-0 ceph-mon[75294]: pgmap v1396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:43 compute-0 ceph-mon[75294]: pgmap v1397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:45 compute-0 ceph-mon[75294]: pgmap v1398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:38:46.983 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:38:46.983 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:38:46.983 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:47 compute-0 ceph-mon[75294]: pgmap v1399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:49 compute-0 ceph-mon[75294]: pgmap v1400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:50 compute-0 podman[257312]: 2026-01-31 08:38:50.185399726 +0000 UTC m=+0.054824430 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 08:38:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:38:50
Jan 31 08:38:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:38:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:38:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Jan 31 08:38:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:38:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:51 compute-0 ceph-mon[75294]: pgmap v1401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:53 compute-0 ceph-mon[75294]: pgmap v1402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:55 compute-0 ceph-mon[75294]: pgmap v1403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:38:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:38:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:57 compute-0 podman[257334]: 2026-01-31 08:38:57.216408698 +0000 UTC m=+0.089565421 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:38:57 compute-0 ceph-mon[75294]: pgmap v1404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:59 compute-0 ceph-mon[75294]: pgmap v1405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:01 compute-0 ceph-mon[75294]: pgmap v1406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:03 compute-0 ceph-mon[75294]: pgmap v1407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:05 compute-0 ceph-mon[75294]: pgmap v1408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:39:08 compute-0 ceph-mon[75294]: pgmap v1409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:09 compute-0 ceph-mon[75294]: pgmap v1410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:10 compute-0 sudo[257361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:10 compute-0 sudo[257361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:10 compute-0 sudo[257361]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:10 compute-0 sudo[257386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:39:10 compute-0 sudo[257386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:10 compute-0 sudo[257386]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:39:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:39:11 compute-0 ceph-mon[75294]: pgmap v1411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:11 compute-0 sudo[257430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:11 compute-0 sudo[257430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:11 compute-0 sudo[257430]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:11 compute-0 sudo[257455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:39:11 compute-0 sudo[257455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:11 compute-0 sudo[257455]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:39:11 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:39:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:39:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:39:11 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:39:11 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:39:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:39:11 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:11 compute-0 sudo[257509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:11 compute-0 sudo[257509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:11 compute-0 sudo[257509]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:12 compute-0 sudo[257534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:39:12 compute-0 sudo[257534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:12 compute-0 nova_compute[240062]: 2026-01-31 08:39:12.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:12 compute-0 nova_compute[240062]: 2026-01-31 08:39:12.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:12 compute-0 podman[257570]: 2026-01-31 08:39:12.265674535 +0000 UTC m=+0.018647122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:39:12 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:12 compute-0 podman[257570]: 2026-01-31 08:39:12.650541072 +0000 UTC m=+0.403513639 container create edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brahmagupta, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:39:12 compute-0 systemd[1]: Started libpod-conmon-edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4.scope.
Jan 31 08:39:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:12 compute-0 podman[257570]: 2026-01-31 08:39:12.841570246 +0000 UTC m=+0.594542843 container init edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brahmagupta, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:39:12 compute-0 podman[257570]: 2026-01-31 08:39:12.850677002 +0000 UTC m=+0.603649569 container start edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:39:12 compute-0 hungry_brahmagupta[257586]: 167 167
Jan 31 08:39:12 compute-0 systemd[1]: libpod-edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4.scope: Deactivated successfully.
Jan 31 08:39:12 compute-0 podman[257570]: 2026-01-31 08:39:12.906110905 +0000 UTC m=+0.659083492 container attach edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:12 compute-0 podman[257570]: 2026-01-31 08:39:12.90749108 +0000 UTC m=+0.660463667 container died edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.183 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.184 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.184 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.184 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-54e35d3372f4b433aabb0669717358fd5262f830ae4133de8a6bdd579b861794-merged.mount: Deactivated successfully.
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.210 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.210 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.210 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.210 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.211 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:13 compute-0 podman[257570]: 2026-01-31 08:39:13.556186545 +0000 UTC m=+1.309159112 container remove edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:13 compute-0 systemd[1]: libpod-conmon-edf2d2dd2ff28341bc72aa7cc1ce095932718ff934fca1bfe98047a1030ba8e4.scope: Deactivated successfully.
Jan 31 08:39:13 compute-0 ceph-mon[75294]: pgmap v1412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:39:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626557398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:13 compute-0 podman[257629]: 2026-01-31 08:39:13.662557151 +0000 UTC m=+0.023299149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:13 compute-0 podman[257629]: 2026-01-31 08:39:13.758079228 +0000 UTC m=+0.118821206 container create 1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.762 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:13 compute-0 systemd[1]: Started libpod-conmon-1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1.scope.
Jan 31 08:39:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61c0be4ef6f1267b1eaccf072b4580d8e1f7d92b64871865c8c6c16fe49f4e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61c0be4ef6f1267b1eaccf072b4580d8e1f7d92b64871865c8c6c16fe49f4e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61c0be4ef6f1267b1eaccf072b4580d8e1f7d92b64871865c8c6c16fe49f4e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61c0be4ef6f1267b1eaccf072b4580d8e1f7d92b64871865c8c6c16fe49f4e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61c0be4ef6f1267b1eaccf072b4580d8e1f7d92b64871865c8c6c16fe49f4e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.938 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.941 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5097MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.941 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:13 compute-0 nova_compute[240062]: 2026-01-31 08:39:13.942 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:13 compute-0 podman[257629]: 2026-01-31 08:39:13.97326938 +0000 UTC m=+0.334011388 container init 1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_poincare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:13 compute-0 podman[257629]: 2026-01-31 08:39:13.98053842 +0000 UTC m=+0.341280398 container start 1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:14 compute-0 podman[257629]: 2026-01-31 08:39:14.059889806 +0000 UTC m=+0.420631814 container attach 1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:39:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:14 compute-0 nice_poincare[257647]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:39:14 compute-0 nice_poincare[257647]: --> All data devices are unavailable
Jan 31 08:39:14 compute-0 systemd[1]: libpod-1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1.scope: Deactivated successfully.
Jan 31 08:39:14 compute-0 podman[257667]: 2026-01-31 08:39:14.409335736 +0000 UTC m=+0.022947679 container died 1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_poincare, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:39:14 compute-0 nova_compute[240062]: 2026-01-31 08:39:14.520 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:39:14 compute-0 nova_compute[240062]: 2026-01-31 08:39:14.522 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:39:14 compute-0 nova_compute[240062]: 2026-01-31 08:39:14.537 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61c0be4ef6f1267b1eaccf072b4580d8e1f7d92b64871865c8c6c16fe49f4e0-merged.mount: Deactivated successfully.
Jan 31 08:39:14 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3626557398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:15 compute-0 podman[257667]: 2026-01-31 08:39:15.104984275 +0000 UTC m=+0.718596198 container remove 1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_poincare, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:39:15 compute-0 systemd[1]: libpod-conmon-1db32adc2fc357a5c5c118805d79bec89cc8965496b66247ef2cb56299c3a4a1.scope: Deactivated successfully.
Jan 31 08:39:15 compute-0 sudo[257534]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:15 compute-0 sudo[257702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:15 compute-0 sudo[257702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:15 compute-0 sudo[257702]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:39:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910431024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:15 compute-0 nova_compute[240062]: 2026-01-31 08:39:15.291 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:15 compute-0 sudo[257727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:39:15 compute-0 sudo[257727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:15 compute-0 nova_compute[240062]: 2026-01-31 08:39:15.297 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:39:15 compute-0 nova_compute[240062]: 2026-01-31 08:39:15.319 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:39:15 compute-0 nova_compute[240062]: 2026-01-31 08:39:15.321 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:39:15 compute-0 nova_compute[240062]: 2026-01-31 08:39:15.321 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:15 compute-0 podman[257767]: 2026-01-31 08:39:15.623440022 +0000 UTC m=+0.060331726 container create 57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:39:15 compute-0 podman[257767]: 2026-01-31 08:39:15.585060991 +0000 UTC m=+0.021952715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:15 compute-0 systemd[1]: Started libpod-conmon-57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12.scope.
Jan 31 08:39:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:15 compute-0 podman[257767]: 2026-01-31 08:39:15.741086927 +0000 UTC m=+0.177978661 container init 57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hypatia, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:39:15 compute-0 podman[257767]: 2026-01-31 08:39:15.748161713 +0000 UTC m=+0.185053417 container start 57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hypatia, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:15 compute-0 pedantic_hypatia[257784]: 167 167
Jan 31 08:39:15 compute-0 systemd[1]: libpod-57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12.scope: Deactivated successfully.
Jan 31 08:39:15 compute-0 podman[257767]: 2026-01-31 08:39:15.779465329 +0000 UTC m=+0.216357063 container attach 57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:39:15 compute-0 podman[257767]: 2026-01-31 08:39:15.780028092 +0000 UTC m=+0.216919806 container died 57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hypatia, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bffeffcd8a9515880896d12d4e6e271a7b4c868c6fdd30da8ea0e5985ee39a8-merged.mount: Deactivated successfully.
Jan 31 08:39:15 compute-0 ceph-mon[75294]: pgmap v1413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:15 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2910431024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:16 compute-0 podman[257767]: 2026-01-31 08:39:16.013798745 +0000 UTC m=+0.450690449 container remove 57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hypatia, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:16 compute-0 systemd[1]: libpod-conmon-57f67d141b03b01a6af4def5173969e01f0107da1c7125455c9fbee51949ed12.scope: Deactivated successfully.
Jan 31 08:39:16 compute-0 podman[257808]: 2026-01-31 08:39:16.16207411 +0000 UTC m=+0.053253451 container create e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cori, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:16 compute-0 systemd[1]: Started libpod-conmon-e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857.scope.
Jan 31 08:39:16 compute-0 podman[257808]: 2026-01-31 08:39:16.134124967 +0000 UTC m=+0.025304328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab9b6b96ba549f8afbe8c99a20d8e176372fe10c3ba8c87b73005bb9c45b75f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab9b6b96ba549f8afbe8c99a20d8e176372fe10c3ba8c87b73005bb9c45b75f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab9b6b96ba549f8afbe8c99a20d8e176372fe10c3ba8c87b73005bb9c45b75f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab9b6b96ba549f8afbe8c99a20d8e176372fe10c3ba8c87b73005bb9c45b75f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:16 compute-0 podman[257808]: 2026-01-31 08:39:16.316012085 +0000 UTC m=+0.207191446 container init e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:16 compute-0 podman[257808]: 2026-01-31 08:39:16.324678279 +0000 UTC m=+0.215857620 container start e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cori, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:39:16 compute-0 podman[257808]: 2026-01-31 08:39:16.346233263 +0000 UTC m=+0.237412634 container attach e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cori, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:39:16 compute-0 gracious_cori[257824]: {
Jan 31 08:39:16 compute-0 gracious_cori[257824]:     "0": [
Jan 31 08:39:16 compute-0 gracious_cori[257824]:         {
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "devices": [
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "/dev/loop3"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             ],
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_name": "ceph_lv0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_size": "21470642176",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "name": "ceph_lv0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "tags": {
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cluster_name": "ceph",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.crush_device_class": "",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.encrypted": "0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.objectstore": "bluestore",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osd_id": "0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.type": "block",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.vdo": "0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.with_tpm": "0"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             },
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "type": "block",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "vg_name": "ceph_vg0"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:         }
Jan 31 08:39:16 compute-0 gracious_cori[257824]:     ],
Jan 31 08:39:16 compute-0 gracious_cori[257824]:     "1": [
Jan 31 08:39:16 compute-0 gracious_cori[257824]:         {
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "devices": [
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "/dev/loop4"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             ],
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_name": "ceph_lv1",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_size": "21470642176",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "name": "ceph_lv1",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "tags": {
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cluster_name": "ceph",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.crush_device_class": "",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.encrypted": "0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.objectstore": "bluestore",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osd_id": "1",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.type": "block",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.vdo": "0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.with_tpm": "0"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             },
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "type": "block",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "vg_name": "ceph_vg1"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:         }
Jan 31 08:39:16 compute-0 gracious_cori[257824]:     ],
Jan 31 08:39:16 compute-0 gracious_cori[257824]:     "2": [
Jan 31 08:39:16 compute-0 gracious_cori[257824]:         {
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "devices": [
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "/dev/loop5"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             ],
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_name": "ceph_lv2",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_size": "21470642176",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "name": "ceph_lv2",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "tags": {
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.cluster_name": "ceph",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.crush_device_class": "",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.encrypted": "0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.objectstore": "bluestore",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osd_id": "2",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.type": "block",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.vdo": "0",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:                 "ceph.with_tpm": "0"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             },
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "type": "block",
Jan 31 08:39:16 compute-0 gracious_cori[257824]:             "vg_name": "ceph_vg2"
Jan 31 08:39:16 compute-0 gracious_cori[257824]:         }
Jan 31 08:39:16 compute-0 gracious_cori[257824]:     ]
Jan 31 08:39:16 compute-0 gracious_cori[257824]: }
Jan 31 08:39:16 compute-0 systemd[1]: libpod-e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857.scope: Deactivated successfully.
Jan 31 08:39:16 compute-0 podman[257808]: 2026-01-31 08:39:16.62532216 +0000 UTC m=+0.516501511 container died e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cab9b6b96ba549f8afbe8c99a20d8e176372fe10c3ba8c87b73005bb9c45b75f-merged.mount: Deactivated successfully.
Jan 31 08:39:17 compute-0 ceph-mon[75294]: pgmap v1414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:17 compute-0 podman[257808]: 2026-01-31 08:39:17.009777856 +0000 UTC m=+0.900957197 container remove e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:39:17 compute-0 systemd[1]: libpod-conmon-e09bbbb40541720d5e5a2dc2c869481f0d28e3aecf4e73a3e8f1f78b9ec36857.scope: Deactivated successfully.
Jan 31 08:39:17 compute-0 sudo[257727]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:17 compute-0 sudo[257849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:17 compute-0 sudo[257849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:17 compute-0 sudo[257849]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:17 compute-0 sudo[257874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:39:17 compute-0 sudo[257874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:17 compute-0 nova_compute[240062]: 2026-01-31 08:39:17.316 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:17 compute-0 nova_compute[240062]: 2026-01-31 08:39:17.318 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:17 compute-0 podman[257911]: 2026-01-31 08:39:17.536807346 +0000 UTC m=+0.093665702 container create ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kare, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:39:17 compute-0 podman[257911]: 2026-01-31 08:39:17.466986106 +0000 UTC m=+0.023844482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:17 compute-0 systemd[1]: Started libpod-conmon-ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c.scope.
Jan 31 08:39:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:17 compute-0 podman[257911]: 2026-01-31 08:39:17.625034483 +0000 UTC m=+0.181892869 container init ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:17 compute-0 podman[257911]: 2026-01-31 08:39:17.632705532 +0000 UTC m=+0.189563888 container start ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:39:17 compute-0 hopeful_kare[257927]: 167 167
Jan 31 08:39:17 compute-0 systemd[1]: libpod-ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c.scope: Deactivated successfully.
Jan 31 08:39:17 compute-0 podman[257911]: 2026-01-31 08:39:17.731898911 +0000 UTC m=+0.288757267 container attach ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:39:17 compute-0 podman[257911]: 2026-01-31 08:39:17.732353081 +0000 UTC m=+0.289211437 container died ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b329daa553f1ff24a1fc712390a9ef80188c9bc95a8100f4d8b163c40b196747-merged.mount: Deactivated successfully.
Jan 31 08:39:18 compute-0 podman[257911]: 2026-01-31 08:39:18.014512614 +0000 UTC m=+0.571370970 container remove ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:18 compute-0 systemd[1]: libpod-conmon-ab3e20008056a1b0c36480c2afda55760b5955ec33396a769206a1c302d4875c.scope: Deactivated successfully.
Jan 31 08:39:18 compute-0 podman[257949]: 2026-01-31 08:39:18.189629313 +0000 UTC m=+0.081710585 container create e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ride, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:39:18 compute-0 podman[257949]: 2026-01-31 08:39:18.132164469 +0000 UTC m=+0.024245761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:18 compute-0 systemd[1]: Started libpod-conmon-e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec.scope.
Jan 31 08:39:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717109eff61a9ab5127bdc589282623546a35454c3d7d3a4f1aa2dfe60397d82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717109eff61a9ab5127bdc589282623546a35454c3d7d3a4f1aa2dfe60397d82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717109eff61a9ab5127bdc589282623546a35454c3d7d3a4f1aa2dfe60397d82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717109eff61a9ab5127bdc589282623546a35454c3d7d3a4f1aa2dfe60397d82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:18 compute-0 podman[257949]: 2026-01-31 08:39:18.33395092 +0000 UTC m=+0.226032222 container init e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:39:18 compute-0 podman[257949]: 2026-01-31 08:39:18.3420056 +0000 UTC m=+0.234086862 container start e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:39:18 compute-0 podman[257949]: 2026-01-31 08:39:18.362080217 +0000 UTC m=+0.254161509 container attach e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:39:19 compute-0 lvm[258047]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:39:19 compute-0 lvm[258048]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:39:19 compute-0 lvm[258048]: VG ceph_vg1 finished
Jan 31 08:39:19 compute-0 lvm[258047]: VG ceph_vg0 finished
Jan 31 08:39:19 compute-0 lvm[258050]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:39:19 compute-0 lvm[258050]: VG ceph_vg2 finished
Jan 31 08:39:19 compute-0 nova_compute[240062]: 2026-01-31 08:39:19.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:19 compute-0 suspicious_ride[257966]: {}
Jan 31 08:39:19 compute-0 systemd[1]: libpod-e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec.scope: Deactivated successfully.
Jan 31 08:39:19 compute-0 podman[257949]: 2026-01-31 08:39:19.245224532 +0000 UTC m=+1.137305814 container died e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ride, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:39:19 compute-0 systemd[1]: libpod-e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec.scope: Consumed 1.300s CPU time.
Jan 31 08:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-717109eff61a9ab5127bdc589282623546a35454c3d7d3a4f1aa2dfe60397d82-merged.mount: Deactivated successfully.
Jan 31 08:39:19 compute-0 ceph-mon[75294]: pgmap v1415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:19 compute-0 sshd-session[257981]: Invalid user sol from 80.94.92.182 port 38876
Jan 31 08:39:19 compute-0 podman[257949]: 2026-01-31 08:39:19.429533999 +0000 UTC m=+1.321615271 container remove e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ride, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:19 compute-0 systemd[1]: libpod-conmon-e56b7929d9d047b5598f5b6c0b13d912ee97a3c7271cdb4745e21b901d0634ec.scope: Deactivated successfully.
Jan 31 08:39:19 compute-0 sudo[257874]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:39:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:39:19 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:19 compute-0 sshd-session[257981]: Connection closed by invalid user sol 80.94.92.182 port 38876 [preauth]
Jan 31 08:39:19 compute-0 sudo[258065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:39:19 compute-0 sudo[258065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:19 compute-0 sudo[258065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:20 compute-0 nova_compute[240062]: 2026-01-31 08:39:20.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:39:21 compute-0 podman[258090]: 2026-01-31 08:39:21.187369349 +0000 UTC m=+0.060386348 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:39:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:21 compute-0 ceph-mon[75294]: pgmap v1416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:23 compute-0 ceph-mon[75294]: pgmap v1417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:25 compute-0 ceph-mon[75294]: pgmap v1418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:28 compute-0 ceph-mon[75294]: pgmap v1419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:28 compute-0 podman[258110]: 2026-01-31 08:39:28.201453053 +0000 UTC m=+0.075679558 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:39:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:29 compute-0 ceph-mon[75294]: pgmap v1420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:31 compute-0 ceph-mon[75294]: pgmap v1421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:33 compute-0 ceph-mon[75294]: pgmap v1422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:35 compute-0 ceph-mon[75294]: pgmap v1423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:37 compute-0 ceph-mon[75294]: pgmap v1424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:39:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262132678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:39:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:39:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262132678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:39:39 compute-0 ceph-mon[75294]: pgmap v1425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1262132678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:39:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/1262132678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:39:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:41 compute-0 ceph-mon[75294]: pgmap v1426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:43 compute-0 ceph-mon[75294]: pgmap v1427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:45 compute-0 ceph-mon[75294]: pgmap v1428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:39:46.984 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:39:46.984 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:39:46.984 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:47 compute-0 ceph-mon[75294]: pgmap v1429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:49 compute-0 ceph-mon[75294]: pgmap v1430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:39:50
Jan 31 08:39:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:39:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:39:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Jan 31 08:39:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:39:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:51 compute-0 ceph-mon[75294]: pgmap v1431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:52 compute-0 podman[258136]: 2026-01-31 08:39:52.172458066 +0000 UTC m=+0.046168376 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:39:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:52 compute-0 ceph-mon[75294]: pgmap v1432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:55 compute-0 ceph-mon[75294]: pgmap v1433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:39:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:39:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.448002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848796448075, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1043, "num_deletes": 250, "total_data_size": 1557917, "memory_usage": 1579536, "flush_reason": "Manual Compaction"}
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848796574290, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 933647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27849, "largest_seqno": 28891, "table_properties": {"data_size": 929670, "index_size": 1629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10476, "raw_average_key_size": 20, "raw_value_size": 921099, "raw_average_value_size": 1813, "num_data_blocks": 74, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848694, "oldest_key_time": 1769848694, "file_creation_time": 1769848796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 126321 microseconds, and 2740 cpu microseconds.
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.574337) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 933647 bytes OK
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.574355) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.838372) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.838425) EVENT_LOG_v1 {"time_micros": 1769848796838416, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.838451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1553030, prev total WAL file size 1579848, number of live WAL files 2.
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.839165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(911KB)], [62(9445KB)]
Jan 31 08:39:56 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848796839259, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10606325, "oldest_snapshot_seqno": -1}
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5224 keys, 7864388 bytes, temperature: kUnknown
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848797184883, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7864388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7830798, "index_size": 19400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 129566, "raw_average_key_size": 24, "raw_value_size": 7737752, "raw_average_value_size": 1481, "num_data_blocks": 805, "num_entries": 5224, "num_filter_entries": 5224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769848796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.185109) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7864388 bytes
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.239389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.7 rd, 22.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.2 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(19.8) write-amplify(8.4) OK, records in: 5691, records dropped: 467 output_compression: NoCompression
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.239423) EVENT_LOG_v1 {"time_micros": 1769848797239408, "job": 34, "event": "compaction_finished", "compaction_time_micros": 345687, "compaction_time_cpu_micros": 16367, "output_level": 6, "num_output_files": 1, "total_output_size": 7864388, "num_input_records": 5691, "num_output_records": 5224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848797239754, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848797240545, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:56.839015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.240767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.240774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.240776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.240777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:57 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:39:57.240779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:57 compute-0 ceph-mon[75294]: pgmap v1434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:59 compute-0 podman[258155]: 2026-01-31 08:39:59.192574147 +0000 UTC m=+0.064257914 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller)
Jan 31 08:39:59 compute-0 ceph-mon[75294]: pgmap v1435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:01 compute-0 ceph-mon[75294]: pgmap v1436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:02 compute-0 ceph-mon[75294]: pgmap v1437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:05 compute-0 ceph-mon[75294]: pgmap v1438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:40:07 compute-0 ceph-mon[75294]: pgmap v1439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:09 compute-0 ceph-mon[75294]: pgmap v1440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:11 compute-0 ceph-mon[75294]: pgmap v1441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:12 compute-0 nova_compute[240062]: 2026-01-31 08:40:12.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:12 compute-0 nova_compute[240062]: 2026-01-31 08:40:12.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:13 compute-0 ceph-mon[75294]: pgmap v1442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.154 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.281 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.281 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.281 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.282 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.282 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:40:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295988292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.780 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.890 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.891 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.891 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:13 compute-0 nova_compute[240062]: 2026-01-31 08:40:13.892 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:14 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/295988292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.253 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.253 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.269 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:40:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:40:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240836900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.798 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.802 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.851 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.853 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:40:14 compute-0 nova_compute[240062]: 2026-01-31 08:40:14.853 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:15 compute-0 ceph-mon[75294]: pgmap v1443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:40:15 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4240836900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:15 compute-0 nova_compute[240062]: 2026-01-31 08:40:15.854 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:15 compute-0 nova_compute[240062]: 2026-01-31 08:40:15.855 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:40:15 compute-0 nova_compute[240062]: 2026-01-31 08:40:15.855 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:40:15 compute-0 nova_compute[240062]: 2026-01-31 08:40:15.890 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:40:16 compute-0 nova_compute[240062]: 2026-01-31 08:40:16.184 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:40:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:17 compute-0 ceph-mon[75294]: pgmap v1444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:40:18 compute-0 nova_compute[240062]: 2026-01-31 08:40:18.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:40:19 compute-0 sudo[258225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:19 compute-0 sudo[258225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:19 compute-0 sudo[258225]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:19 compute-0 ceph-mon[75294]: pgmap v1445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 08:40:19 compute-0 sudo[258250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:40:19 compute-0 sudo[258250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:20 compute-0 nova_compute[240062]: 2026-01-31 08:40:20.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:20 compute-0 nova_compute[240062]: 2026-01-31 08:40:20.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:20 compute-0 sudo[258250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:40:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:40:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:40:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:40:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:40:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:40:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:40:20 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:40:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Jan 31 08:40:20 compute-0 sudo[258306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:20 compute-0 sudo[258306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:20 compute-0 sudo[258306]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:20 compute-0 sudo[258331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:40:20 compute-0 sudo[258331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:20 compute-0 podman[258368]: 2026-01-31 08:40:20.664273744 +0000 UTC m=+0.100574063 container create 407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brown, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:40:20 compute-0 podman[258368]: 2026-01-31 08:40:20.586020845 +0000 UTC m=+0.022321184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:40:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:40:20 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:20 compute-0 systemd[1]: Started libpod-conmon-407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77.scope.
Jan 31 08:40:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:20 compute-0 podman[258368]: 2026-01-31 08:40:20.872813692 +0000 UTC m=+0.309114031 container init 407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:40:20 compute-0 podman[258368]: 2026-01-31 08:40:20.878163414 +0000 UTC m=+0.314463733 container start 407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:20 compute-0 cranky_brown[258384]: 167 167
Jan 31 08:40:20 compute-0 systemd[1]: libpod-407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77.scope: Deactivated successfully.
Jan 31 08:40:20 compute-0 podman[258368]: 2026-01-31 08:40:20.908091536 +0000 UTC m=+0.344391855 container attach 407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:40:20 compute-0 podman[258368]: 2026-01-31 08:40:20.908399284 +0000 UTC m=+0.344699593 container died 407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1e1281b8051642f6fbe38bdea2a792b74eee87d1c9239f6380c3c9eb1fa594c-merged.mount: Deactivated successfully.
Jan 31 08:40:21 compute-0 podman[258368]: 2026-01-31 08:40:21.173889272 +0000 UTC m=+0.610189591 container remove 407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:21 compute-0 systemd[1]: libpod-conmon-407ce0c732d3f602431ac6d080fb8cf62688236d0d01a2b91fa544f55f6a1c77.scope: Deactivated successfully.
Jan 31 08:40:21 compute-0 podman[258411]: 2026-01-31 08:40:21.317028979 +0000 UTC m=+0.051391194 container create a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mendeleev, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:40:21 compute-0 systemd[1]: Started libpod-conmon-a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145.scope.
Jan 31 08:40:21 compute-0 podman[258411]: 2026-01-31 08:40:21.28763222 +0000 UTC m=+0.021994455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48638e25cc4674b6d959273d2d3bfc9bab1888e39a78d3a8653d1cc9dc62975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48638e25cc4674b6d959273d2d3bfc9bab1888e39a78d3a8653d1cc9dc62975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48638e25cc4674b6d959273d2d3bfc9bab1888e39a78d3a8653d1cc9dc62975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48638e25cc4674b6d959273d2d3bfc9bab1888e39a78d3a8653d1cc9dc62975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48638e25cc4674b6d959273d2d3bfc9bab1888e39a78d3a8653d1cc9dc62975/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:21 compute-0 podman[258411]: 2026-01-31 08:40:21.433488915 +0000 UTC m=+0.167851150 container init a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mendeleev, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:40:21 compute-0 podman[258411]: 2026-01-31 08:40:21.440103728 +0000 UTC m=+0.174465943 container start a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:40:21 compute-0 podman[258411]: 2026-01-31 08:40:21.479884765 +0000 UTC m=+0.214246980 container attach a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:40:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:21 compute-0 ceph-mon[75294]: pgmap v1446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Jan 31 08:40:21 compute-0 dazzling_mendeleev[258427]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:40:21 compute-0 dazzling_mendeleev[258427]: --> All data devices are unavailable
Jan 31 08:40:21 compute-0 systemd[1]: libpod-a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145.scope: Deactivated successfully.
Jan 31 08:40:21 compute-0 podman[258411]: 2026-01-31 08:40:21.875821296 +0000 UTC m=+0.610183531 container died a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a48638e25cc4674b6d959273d2d3bfc9bab1888e39a78d3a8653d1cc9dc62975-merged.mount: Deactivated successfully.
Jan 31 08:40:22 compute-0 podman[258411]: 2026-01-31 08:40:22.028886599 +0000 UTC m=+0.763248824 container remove a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:40:22 compute-0 systemd[1]: libpod-conmon-a7ed0dcae6e6be04d367db3807b1b75ce58c8d176f78a301b20e8d5ed1db0145.scope: Deactivated successfully.
Jan 31 08:40:22 compute-0 sudo[258331]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:22 compute-0 sudo[258458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:22 compute-0 sudo[258458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:22 compute-0 sudo[258458]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:22 compute-0 sudo[258483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:40:22 compute-0 sudo[258483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Jan 31 08:40:22 compute-0 podman[258507]: 2026-01-31 08:40:22.279870208 +0000 UTC m=+0.056869960 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:40:22 compute-0 podman[258541]: 2026-01-31 08:40:22.469181719 +0000 UTC m=+0.019702979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:22 compute-0 podman[258541]: 2026-01-31 08:40:22.578010597 +0000 UTC m=+0.128531837 container create 05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:22 compute-0 systemd[1]: Started libpod-conmon-05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4.scope.
Jan 31 08:40:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:23 compute-0 podman[258541]: 2026-01-31 08:40:23.02890658 +0000 UTC m=+0.579427850 container init 05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_payne, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:40:23 compute-0 podman[258541]: 2026-01-31 08:40:23.034835497 +0000 UTC m=+0.585356737 container start 05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:23 compute-0 intelligent_payne[258558]: 167 167
Jan 31 08:40:23 compute-0 systemd[1]: libpod-05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4.scope: Deactivated successfully.
Jan 31 08:40:23 compute-0 podman[258541]: 2026-01-31 08:40:23.092106366 +0000 UTC m=+0.642627606 container attach 05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:40:23 compute-0 podman[258541]: 2026-01-31 08:40:23.092514096 +0000 UTC m=+0.643035336 container died 05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_payne, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:40:23 compute-0 ceph-mon[75294]: pgmap v1447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Jan 31 08:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c71ecaded0cd13ae3d2b82b3421a27d8cbf3e7254ef95b53f560688d110905e-merged.mount: Deactivated successfully.
Jan 31 08:40:23 compute-0 podman[258541]: 2026-01-31 08:40:23.860912877 +0000 UTC m=+1.411434127 container remove 05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:40:23 compute-0 systemd[1]: libpod-conmon-05e6e894c079c2a3b2591c6b41b00cb7f85be4f612e9af592a5316f100ca5ea4.scope: Deactivated successfully.
Jan 31 08:40:24 compute-0 podman[258583]: 2026-01-31 08:40:24.056591997 +0000 UTC m=+0.110697714 container create 0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:40:24 compute-0 podman[258583]: 2026-01-31 08:40:23.968027292 +0000 UTC m=+0.022133029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:24 compute-0 systemd[1]: Started libpod-conmon-0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34.scope.
Jan 31 08:40:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46368dcdad05c32a4634f26b42c4542c807d22e6b3584c595ffa3a4f2175802a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46368dcdad05c32a4634f26b42c4542c807d22e6b3584c595ffa3a4f2175802a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46368dcdad05c32a4634f26b42c4542c807d22e6b3584c595ffa3a4f2175802a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46368dcdad05c32a4634f26b42c4542c807d22e6b3584c595ffa3a4f2175802a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 31 08:40:24 compute-0 podman[258583]: 2026-01-31 08:40:24.471895339 +0000 UTC m=+0.526001056 container init 0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_roentgen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:40:24 compute-0 podman[258583]: 2026-01-31 08:40:24.478911572 +0000 UTC m=+0.533017289 container start 0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]: {
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:     "0": [
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:         {
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "devices": [
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "/dev/loop3"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             ],
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_name": "ceph_lv0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_size": "21470642176",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "name": "ceph_lv0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "tags": {
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cluster_name": "ceph",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.crush_device_class": "",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.encrypted": "0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.objectstore": "bluestore",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osd_id": "0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.type": "block",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.vdo": "0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.with_tpm": "0"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             },
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "type": "block",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "vg_name": "ceph_vg0"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:         }
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:     ],
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:     "1": [
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:         {
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "devices": [
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "/dev/loop4"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             ],
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_name": "ceph_lv1",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_size": "21470642176",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "name": "ceph_lv1",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "tags": {
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cluster_name": "ceph",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.crush_device_class": "",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.encrypted": "0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.objectstore": "bluestore",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osd_id": "1",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.type": "block",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.vdo": "0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.with_tpm": "0"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             },
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "type": "block",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "vg_name": "ceph_vg1"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:         }
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:     ],
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:     "2": [
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:         {
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "devices": [
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "/dev/loop5"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             ],
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_name": "ceph_lv2",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_size": "21470642176",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "name": "ceph_lv2",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "tags": {
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.cluster_name": "ceph",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.crush_device_class": "",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.encrypted": "0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.objectstore": "bluestore",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osd_id": "2",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.type": "block",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.vdo": "0",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:                 "ceph.with_tpm": "0"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             },
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "type": "block",
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:             "vg_name": "ceph_vg2"
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:         }
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]:     ]
Jan 31 08:40:24 compute-0 heuristic_roentgen[258599]: }
Jan 31 08:40:24 compute-0 systemd[1]: libpod-0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34.scope: Deactivated successfully.
Jan 31 08:40:24 compute-0 podman[258583]: 2026-01-31 08:40:24.791604491 +0000 UTC m=+0.845710208 container attach 0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_roentgen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:40:24 compute-0 podman[258583]: 2026-01-31 08:40:24.792570545 +0000 UTC m=+0.846676262 container died 0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:40:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-46368dcdad05c32a4634f26b42c4542c807d22e6b3584c595ffa3a4f2175802a-merged.mount: Deactivated successfully.
Jan 31 08:40:25 compute-0 ceph-mon[75294]: pgmap v1448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 31 08:40:26 compute-0 podman[258583]: 2026-01-31 08:40:26.003981384 +0000 UTC m=+2.058087101 container remove 0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:40:26 compute-0 sudo[258483]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:26 compute-0 sudo[258620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:26 compute-0 sudo[258620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:26 compute-0 sudo[258620]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:26 compute-0 systemd[1]: libpod-conmon-0bae7bf3e673fd6047e3b7bd3136b622c1febb33957d6667743c2562258b4b34.scope: Deactivated successfully.
Jan 31 08:40:26 compute-0 sudo[258645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:40:26 compute-0 sudo[258645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 31 08:40:26 compute-0 podman[258682]: 2026-01-31 08:40:26.455149574 +0000 UTC m=+0.091805915 container create ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elion, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:40:26 compute-0 podman[258682]: 2026-01-31 08:40:26.380543546 +0000 UTC m=+0.017199887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:26 compute-0 systemd[1]: Started libpod-conmon-ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164.scope.
Jan 31 08:40:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:26 compute-0 podman[258682]: 2026-01-31 08:40:26.63091618 +0000 UTC m=+0.267572551 container init ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 08:40:26 compute-0 podman[258682]: 2026-01-31 08:40:26.637323268 +0000 UTC m=+0.273979609 container start ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:40:26 compute-0 infallible_elion[258698]: 167 167
Jan 31 08:40:26 compute-0 systemd[1]: libpod-ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164.scope: Deactivated successfully.
Jan 31 08:40:26 compute-0 podman[258682]: 2026-01-31 08:40:26.722414107 +0000 UTC m=+0.359070448 container attach ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elion, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:40:26 compute-0 podman[258682]: 2026-01-31 08:40:26.7229332 +0000 UTC m=+0.359589541 container died ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7a788198a8491518f185d4686accb002734526b7ec85f7c265962784356d91-merged.mount: Deactivated successfully.
Jan 31 08:40:27 compute-0 ceph-mon[75294]: pgmap v1449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 31 08:40:28 compute-0 podman[258682]: 2026-01-31 08:40:28.193323017 +0000 UTC m=+1.829979358 container remove ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elion, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:40:28 compute-0 systemd[1]: libpod-conmon-ab35448f4b43bb1129685142d062b541bc41aafd63ae2373475cc5e85b179164.scope: Deactivated successfully.
Jan 31 08:40:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 08:40:28 compute-0 podman[258723]: 2026-01-31 08:40:28.309122546 +0000 UTC m=+0.019615747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:28 compute-0 podman[258723]: 2026-01-31 08:40:28.656830982 +0000 UTC m=+0.367324163 container create b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:40:28 compute-0 systemd[1]: Started libpod-conmon-b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f.scope.
Jan 31 08:40:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc5bc235c5b2215200723c700d59ea5b207f1ebba1a85fca1d53c51860318024/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc5bc235c5b2215200723c700d59ea5b207f1ebba1a85fca1d53c51860318024/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc5bc235c5b2215200723c700d59ea5b207f1ebba1a85fca1d53c51860318024/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc5bc235c5b2215200723c700d59ea5b207f1ebba1a85fca1d53c51860318024/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:29 compute-0 podman[258723]: 2026-01-31 08:40:29.195387238 +0000 UTC m=+0.905880439 container init b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gates, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:40:29 compute-0 podman[258723]: 2026-01-31 08:40:29.201000317 +0000 UTC m=+0.911493498 container start b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gates, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:40:29 compute-0 podman[258723]: 2026-01-31 08:40:29.314340996 +0000 UTC m=+1.024834167 container attach b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:40:29 compute-0 lvm[258824]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:40:29 compute-0 lvm[258823]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:40:29 compute-0 lvm[258823]: VG ceph_vg0 finished
Jan 31 08:40:29 compute-0 lvm[258824]: VG ceph_vg1 finished
Jan 31 08:40:29 compute-0 lvm[258830]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:40:29 compute-0 lvm[258830]: VG ceph_vg2 finished
Jan 31 08:40:29 compute-0 podman[258814]: 2026-01-31 08:40:29.798247468 +0000 UTC m=+0.068038278 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:40:29 compute-0 practical_gates[258739]: {}
Jan 31 08:40:29 compute-0 systemd[1]: libpod-b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f.scope: Deactivated successfully.
Jan 31 08:40:29 compute-0 podman[258723]: 2026-01-31 08:40:29.902608543 +0000 UTC m=+1.613101724 container died b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:40:29 compute-0 ceph-mon[75294]: pgmap v1450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 08:40:30 compute-0 nova_compute[240062]: 2026-01-31 08:40:30.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc5bc235c5b2215200723c700d59ea5b207f1ebba1a85fca1d53c51860318024-merged.mount: Deactivated successfully.
Jan 31 08:40:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Jan 31 08:40:30 compute-0 podman[258723]: 2026-01-31 08:40:30.496268174 +0000 UTC m=+2.206761355 container remove b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gates, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:40:30 compute-0 systemd[1]: libpod-conmon-b012df32e442035d3644c843e6fbe673f4f2bc6347239e6826f31142ff5f572f.scope: Deactivated successfully.
Jan 31 08:40:30 compute-0 sudo[258645]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:40:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:40:30 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:40:30 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:40:30 compute-0 sudo[258861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:40:30 compute-0 sudo[258861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:31 compute-0 sudo[258861]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:31 compute-0 ceph-mon[75294]: pgmap v1451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Jan 31 08:40:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:40:31 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:40:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Jan 31 08:40:33 compute-0 ceph-mon[75294]: pgmap v1452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Jan 31 08:40:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Jan 31 08:40:35 compute-0 ceph-mon[75294]: pgmap v1453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Jan 31 08:40:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:40:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:37 compute-0 ceph-mon[75294]: pgmap v1454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:40:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:40:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:40:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/290820350' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:40:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:40:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/290820350' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:40:39 compute-0 ceph-mon[75294]: pgmap v1455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:40:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:40:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/290820350' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:40:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/290820350' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:40:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:41 compute-0 ceph-mon[75294]: pgmap v1456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:40:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:44 compute-0 ceph-mon[75294]: pgmap v1457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:45 compute-0 ceph-mon[75294]: pgmap v1458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:40:46.984 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:40:46.985 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:40:46.985 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:47 compute-0 ceph-mon[75294]: pgmap v1459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:49 compute-0 ceph-mon[75294]: pgmap v1460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:40:50
Jan 31 08:40:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:40:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:40:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', 'backups']
Jan 31 08:40:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:40:51 compute-0 ceph-mon[75294]: pgmap v1461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:53 compute-0 podman[258886]: 2026-01-31 08:40:53.199197874 +0000 UTC m=+0.067443172 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:40:53 compute-0 ceph-mon[75294]: pgmap v1462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:55 compute-0 ceph-mon[75294]: pgmap v1463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:55 compute-0 sshd-session[258907]: Accepted publickey for zuul from 192.168.122.30 port 47178 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:40:55 compute-0 systemd-logind[810]: New session 53 of user zuul.
Jan 31 08:40:55 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 31 08:40:55 compute-0 sshd-session[258907]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:40:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:40:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:56 compute-0 sudo[258980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain iscsid.service
Jan 31 08:40:56 compute-0 sudo[258980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:40:56 compute-0 sudo[258980]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:56 compute-0 sudo[259005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_nova_compute.service
Jan 31 08:40:56 compute-0 sudo[259005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:40:56 compute-0 sudo[259005]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:56 compute-0 sudo[259030]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_controller.service
Jan 31 08:40:56 compute-0 sudo[259030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:40:56 compute-0 sudo[259030]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:57 compute-0 sudo[259055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_metadata_agent.service
Jan 31 08:40:57 compute-0 sudo[259055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:40:57 compute-0 sudo[259055]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:57 compute-0 ceph-mon[75294]: pgmap v1464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:59 compute-0 ceph-mon[75294]: pgmap v1465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:00 compute-0 podman[259080]: 2026-01-31 08:41:00.211702387 +0000 UTC m=+0.080202948 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:41:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:01 compute-0 ceph-mon[75294]: pgmap v1466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:02 compute-0 ceph-mon[75294]: pgmap v1467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:05 compute-0 ceph-mon[75294]: pgmap v1468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:05 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:41:05.873 155810 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:b9:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '62:58:2f:a4:b2:e1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:41:05 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:41:05.874 155810 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:41:07 compute-0 ceph-mon[75294]: pgmap v1469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:09 compute-0 ceph-mon[75294]: pgmap v1470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:10 compute-0 sshd-session[259106]: Accepted publickey for zuul from 192.168.122.30 port 50426 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:41:10 compute-0 systemd-logind[810]: New session 54 of user zuul.
Jan 31 08:41:10 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 31 08:41:10 compute-0 sshd-session[259106]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:41:10 compute-0 sudo[259179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Jan 31 08:41:10 compute-0 sudo[259179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:10 compute-0 sudo[259179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:10 compute-0 sudo[259205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Jan 31 08:41:10 compute-0 sudo[259205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 groupadd[259207]: group added to /etc/group: name=podman, GID=42479
Jan 31 08:41:11 compute-0 groupadd[259207]: group added to /etc/gshadow: name=podman
Jan 31 08:41:11 compute-0 groupadd[259207]: new group: name=podman, GID=42479
Jan 31 08:41:11 compute-0 sudo[259205]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Jan 31 08:41:11 compute-0 sudo[259213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 usermod[259215]: add 'zuul' to group 'podman'
Jan 31 08:41:11 compute-0 usermod[259215]: add 'zuul' to shadow group 'podman'
Jan 31 08:41:11 compute-0 sudo[259213]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Jan 31 08:41:11 compute-0 sudo[259222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 sudo[259222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Jan 31 08:41:11 compute-0 sudo[259225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 sudo[259225]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Jan 31 08:41:11 compute-0 sudo[259228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 sudo[259228]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Jan 31 08:41:11 compute-0 sudo[259231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 sudo[259231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Jan 31 08:41:11 compute-0 sudo[259234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 sudo[259234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Jan 31 08:41:11 compute-0 sudo[259237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 systemd[1]: Reloading.
Jan 31 08:41:11 compute-0 systemd-rc-local-generator[259260]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:41:11 compute-0 systemd-sysv-generator[259266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:41:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:11 compute-0 sudo[259237]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Jan 31 08:41:11 compute-0 sudo[259274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 ceph-mon[75294]: pgmap v1471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:11 compute-0 sudo[259274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:11 compute-0 sudo[259277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Jan 31 08:41:11 compute-0 sudo[259277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:11 compute-0 systemd[1]: Reloading.
Jan 31 08:41:12 compute-0 systemd-rc-local-generator[259306]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:41:12 compute-0 systemd-sysv-generator[259310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:41:12 compute-0 systemd[1]: Starting Podman API Socket...
Jan 31 08:41:12 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 31 08:41:12 compute-0 sudo[259277]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 sudo[259315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Jan 31 08:41:12 compute-0 sudo[259315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:12 compute-0 sudo[259315]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 sudo[259318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Jan 31 08:41:12 compute-0 sudo[259318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:12 compute-0 sudo[259318]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:12 compute-0 sudo[259321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Jan 31 08:41:12 compute-0 sudo[259321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:12 compute-0 sudo[259321]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 sudo[259324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Jan 31 08:41:12 compute-0 sudo[259324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:12 compute-0 sudo[259324]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 sudo[259327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Jan 31 08:41:12 compute-0 sudo[259327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:12 compute-0 sudo[259327]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 sudo[259330]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Jan 31 08:41:12 compute-0 dbus-broker-launch[790]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Jan 31 08:41:12 compute-0 sudo[259330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:12 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Jan 31 08:41:12 compute-0 systemd[1]: Closed Podman API Socket.
Jan 31 08:41:12 compute-0 systemd[1]: Stopping Podman API Socket...
Jan 31 08:41:12 compute-0 systemd[1]: Starting Podman API Socket...
Jan 31 08:41:12 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 31 08:41:12 compute-0 sudo[259330]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 sudo[259182]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Jan 31 08:41:12 compute-0 sudo[259182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:12 compute-0 sudo[259182]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:12 compute-0 sshd-session[259336]: Accepted publickey for zuul from 192.168.122.30 port 50434 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:41:12 compute-0 systemd-logind[810]: New session 55 of user zuul.
Jan 31 08:41:12 compute-0 systemd[1]: Started Session 55 of User zuul.
Jan 31 08:41:12 compute-0 sshd-session[259336]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:41:12 compute-0 systemd[1]: Starting Podman API Service...
Jan 31 08:41:12 compute-0 systemd[1]: Started Podman API Service.
Jan 31 08:41:12 compute-0 podman[259340]: time="2026-01-31T08:41:12Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 31 08:41:12 compute-0 podman[259340]: time="2026-01-31T08:41:12Z" level=info msg="Setting parallel job count to 25"
Jan 31 08:41:12 compute-0 podman[259340]: time="2026-01-31T08:41:12Z" level=info msg="Using sqlite as database backend"
Jan 31 08:41:12 compute-0 podman[259340]: time="2026-01-31T08:41:12Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 31 08:41:12 compute-0 podman[259340]: time="2026-01-31T08:41:12Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 31 08:41:12 compute-0 podman[259340]: time="2026-01-31T08:41:12Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 31 08:41:12 compute-0 podman[259340]: @ - - [31/Jan/2026:08:41:12 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 31 08:41:12 compute-0 podman[259340]: @ - - [31/Jan/2026:08:41:12 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22534 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 31 08:41:12 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:41:12.875 155810 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41f56c18-6e96-48c3-b4a0-6aca47eb55b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:12 compute-0 ceph-mon[75294]: pgmap v1472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.278 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.278 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.278 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.279 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.279 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:13 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:41:13 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504065324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.787 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.942 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.943 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5091MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.943 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:13 compute-0 nova_compute[240062]: 2026-01-31 08:41:13.944 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:14 compute-0 nova_compute[240062]: 2026-01-31 08:41:14.387 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:41:14 compute-0 nova_compute[240062]: 2026-01-31 08:41:14.388 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:41:14 compute-0 nova_compute[240062]: 2026-01-31 08:41:14.406 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:14 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2504065324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:41:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366007118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:14 compute-0 nova_compute[240062]: 2026-01-31 08:41:14.982 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:14 compute-0 nova_compute[240062]: 2026-01-31 08:41:14.988 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:41:15 compute-0 nova_compute[240062]: 2026-01-31 08:41:15.174 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:41:15 compute-0 nova_compute[240062]: 2026-01-31 08:41:15.176 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:41:15 compute-0 nova_compute[240062]: 2026-01-31 08:41:15.176 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:15 compute-0 nova_compute[240062]: 2026-01-31 08:41:15.177 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:15 compute-0 nova_compute[240062]: 2026-01-31 08:41:15.177 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:41:15 compute-0 ceph-mon[75294]: pgmap v1473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:15 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/366007118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:16 compute-0 nova_compute[240062]: 2026-01-31 08:41:16.251 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:16 compute-0 nova_compute[240062]: 2026-01-31 08:41:16.251 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:41:16 compute-0 nova_compute[240062]: 2026-01-31 08:41:16.252 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:41:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:16 compute-0 nova_compute[240062]: 2026-01-31 08:41:16.385 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:41:16 compute-0 nova_compute[240062]: 2026-01-31 08:41:16.386 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:17 compute-0 ceph-mon[75294]: pgmap v1474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:18 compute-0 nova_compute[240062]: 2026-01-31 08:41:18.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:18 compute-0 nova_compute[240062]: 2026-01-31 08:41:18.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:19 compute-0 ceph-mon[75294]: pgmap v1475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:20 compute-0 nova_compute[240062]: 2026-01-31 08:41:20.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:20 compute-0 ceph-mon[75294]: pgmap v1476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:21 compute-0 nova_compute[240062]: 2026-01-31 08:41:21.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:22 compute-0 nova_compute[240062]: 2026-01-31 08:41:22.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:22 compute-0 nova_compute[240062]: 2026-01-31 08:41:22.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:41:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:22 compute-0 nova_compute[240062]: 2026-01-31 08:41:22.471 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:41:23 compute-0 ceph-mon[75294]: pgmap v1477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:24 compute-0 podman[259398]: 2026-01-31 08:41:24.174117536 +0000 UTC m=+0.045608272 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 08:41:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:25 compute-0 nova_compute[240062]: 2026-01-31 08:41:25.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:25 compute-0 ceph-mon[75294]: pgmap v1478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:27 compute-0 ceph-mon[75294]: pgmap v1479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:27 compute-0 podman[259340]: time="2026-01-31T08:41:27Z" level=info msg="Received shutdown.Stop(), terminating!" PID=259340
Jan 31 08:41:27 compute-0 systemd[1]: podman.service: Deactivated successfully.
Jan 31 08:41:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:29 compute-0 ceph-mon[75294]: pgmap v1480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:31 compute-0 sudo[259419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:31 compute-0 sudo[259419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:31 compute-0 sudo[259419]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:31 compute-0 sudo[259450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:41:31 compute-0 sudo[259450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:31 compute-0 podman[259443]: 2026-01-31 08:41:31.130267962 +0000 UTC m=+0.061717480 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:41:31 compute-0 ceph-mon[75294]: pgmap v1481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:31 compute-0 sudo[259450]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:41:31 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:41:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:41:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:41:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:41:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:41:31 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:41:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:41:31 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:41:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:41:31 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:31 compute-0 sudo[259526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:31 compute-0 sudo[259526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:31 compute-0 sudo[259526]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:31 compute-0 sudo[259551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:41:31 compute-0 sudo[259551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:32 compute-0 podman[259588]: 2026-01-31 08:41:32.23584962 +0000 UTC m=+0.021663068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:32 compute-0 podman[259588]: 2026-01-31 08:41:32.340501853 +0000 UTC m=+0.126315281 container create 2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:41:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:41:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:41:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:41:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:41:32 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:32 compute-0 systemd[1]: Started libpod-conmon-2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b.scope.
Jan 31 08:41:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:32 compute-0 podman[259588]: 2026-01-31 08:41:32.63213477 +0000 UTC m=+0.417948228 container init 2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dewdney, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:41:32 compute-0 podman[259588]: 2026-01-31 08:41:32.638052547 +0000 UTC m=+0.423865965 container start 2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dewdney, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:41:32 compute-0 magical_dewdney[259605]: 167 167
Jan 31 08:41:32 compute-0 systemd[1]: libpod-2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b.scope: Deactivated successfully.
Jan 31 08:41:32 compute-0 podman[259588]: 2026-01-31 08:41:32.816013246 +0000 UTC m=+0.601826664 container attach 2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:41:32 compute-0 podman[259588]: 2026-01-31 08:41:32.816358154 +0000 UTC m=+0.602171582 container died 2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:41:32 compute-0 sshd-session[259608]: Invalid user solana from 193.32.162.145 port 38898
Jan 31 08:41:33 compute-0 sshd-session[259608]: Connection closed by invalid user solana 193.32.162.145 port 38898 [preauth]
Jan 31 08:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8d6e10f79526ec0a4e29d3c9f505cc3db050a252960564d11d6c4a999a07863-merged.mount: Deactivated successfully.
Jan 31 08:41:33 compute-0 ceph-mon[75294]: pgmap v1482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:34 compute-0 podman[259588]: 2026-01-31 08:41:34.668963383 +0000 UTC m=+2.454776811 container remove 2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dewdney, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:41:34 compute-0 systemd[1]: libpod-conmon-2a14fba89cbfd7d918a11923fc83eb30acb2f01eec0a0000cae9bfeeb11a965b.scope: Deactivated successfully.
Jan 31 08:41:34 compute-0 podman[259630]: 2026-01-31 08:41:34.766758687 +0000 UTC m=+0.020385106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:35 compute-0 podman[259630]: 2026-01-31 08:41:35.112633087 +0000 UTC m=+0.366259496 container create 48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:41:35 compute-0 ceph-mon[75294]: pgmap v1483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:35 compute-0 systemd[1]: Started libpod-conmon-48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e.scope.
Jan 31 08:41:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d438b9e531d415beb020fe09f1aac53ced46d9c3ae2c80ac49cbc6da081709/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d438b9e531d415beb020fe09f1aac53ced46d9c3ae2c80ac49cbc6da081709/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d438b9e531d415beb020fe09f1aac53ced46d9c3ae2c80ac49cbc6da081709/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d438b9e531d415beb020fe09f1aac53ced46d9c3ae2c80ac49cbc6da081709/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d438b9e531d415beb020fe09f1aac53ced46d9c3ae2c80ac49cbc6da081709/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:35 compute-0 podman[259630]: 2026-01-31 08:41:35.378866745 +0000 UTC m=+0.632493174 container init 48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:41:35 compute-0 podman[259630]: 2026-01-31 08:41:35.385218312 +0000 UTC m=+0.638844711 container start 48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:41:35 compute-0 podman[259630]: 2026-01-31 08:41:35.423701736 +0000 UTC m=+0.677328155 container attach 48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:41:35 compute-0 vigilant_wilbur[259647]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:41:35 compute-0 vigilant_wilbur[259647]: --> All data devices are unavailable
Jan 31 08:41:35 compute-0 systemd[1]: libpod-48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e.scope: Deactivated successfully.
Jan 31 08:41:35 compute-0 podman[259630]: 2026-01-31 08:41:35.80320923 +0000 UTC m=+1.056835629 container died 48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-87d438b9e531d415beb020fe09f1aac53ced46d9c3ae2c80ac49cbc6da081709-merged.mount: Deactivated successfully.
Jan 31 08:41:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:36 compute-0 podman[259630]: 2026-01-31 08:41:36.931748755 +0000 UTC m=+2.185375154 container remove 48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_wilbur, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:41:36 compute-0 systemd[1]: libpod-conmon-48ead5f4e70fb03552ec712742cd9bc748943e39bc6c1f0ff525ead439e4754e.scope: Deactivated successfully.
Jan 31 08:41:36 compute-0 sudo[259551]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:37 compute-0 sudo[259678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:37 compute-0 sudo[259678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:37 compute-0 sudo[259678]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:37 compute-0 sudo[259703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:41:37 compute-0 sudo[259703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:37 compute-0 podman[259740]: 2026-01-31 08:41:37.362264224 +0000 UTC m=+0.080781423 container create 92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:41:37 compute-0 podman[259740]: 2026-01-31 08:41:37.300942395 +0000 UTC m=+0.019459624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:37 compute-0 systemd[1]: Started libpod-conmon-92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b.scope.
Jan 31 08:41:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:37 compute-0 podman[259740]: 2026-01-31 08:41:37.488034571 +0000 UTC m=+0.206551790 container init 92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:41:37 compute-0 podman[259740]: 2026-01-31 08:41:37.492996083 +0000 UTC m=+0.211513282 container start 92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_faraday, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:41:37 compute-0 confident_faraday[259756]: 167 167
Jan 31 08:41:37 compute-0 systemd[1]: libpod-92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b.scope: Deactivated successfully.
Jan 31 08:41:37 compute-0 podman[259740]: 2026-01-31 08:41:37.529423066 +0000 UTC m=+0.247940305 container attach 92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:41:37 compute-0 podman[259740]: 2026-01-31 08:41:37.529744585 +0000 UTC m=+0.248261784 container died 92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:41:37 compute-0 ceph-mon[75294]: pgmap v1484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-015e53dbc4c225dfb50a1eeb93eae81a82e319c53e5b2fac19c1b90aca9ae8be-merged.mount: Deactivated successfully.
Jan 31 08:41:37 compute-0 podman[259740]: 2026-01-31 08:41:37.888756611 +0000 UTC m=+0.607273810 container remove 92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:41:37 compute-0 systemd[1]: libpod-conmon-92c72d72bb9901806c08cc0984337a48c3a5e8db517b15b2b58f110da005d41b.scope: Deactivated successfully.
Jan 31 08:41:38 compute-0 podman[259779]: 2026-01-31 08:41:38.077854617 +0000 UTC m=+0.104678785 container create abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:41:38 compute-0 podman[259779]: 2026-01-31 08:41:37.992897071 +0000 UTC m=+0.019721259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:38 compute-0 systemd[1]: Started libpod-conmon-abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700.scope.
Jan 31 08:41:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecac5297bf34561c352244c781a1437cbc97841ffcd0c195c5b4b335c3fd66c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecac5297bf34561c352244c781a1437cbc97841ffcd0c195c5b4b335c3fd66c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecac5297bf34561c352244c781a1437cbc97841ffcd0c195c5b4b335c3fd66c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecac5297bf34561c352244c781a1437cbc97841ffcd0c195c5b4b335c3fd66c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:38 compute-0 podman[259779]: 2026-01-31 08:41:38.243360358 +0000 UTC m=+0.270184526 container init abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:41:38 compute-0 podman[259779]: 2026-01-31 08:41:38.249756147 +0000 UTC m=+0.276580305 container start abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:41:38 compute-0 podman[259779]: 2026-01-31 08:41:38.298976237 +0000 UTC m=+0.325800415 container attach abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:41:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:38 compute-0 epic_carson[259795]: {
Jan 31 08:41:38 compute-0 epic_carson[259795]:     "0": [
Jan 31 08:41:38 compute-0 epic_carson[259795]:         {
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "devices": [
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "/dev/loop3"
Jan 31 08:41:38 compute-0 epic_carson[259795]:             ],
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_name": "ceph_lv0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_size": "21470642176",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "name": "ceph_lv0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "tags": {
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cluster_name": "ceph",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.crush_device_class": "",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.encrypted": "0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.objectstore": "bluestore",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osd_id": "0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.type": "block",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.vdo": "0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.with_tpm": "0"
Jan 31 08:41:38 compute-0 epic_carson[259795]:             },
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "type": "block",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "vg_name": "ceph_vg0"
Jan 31 08:41:38 compute-0 epic_carson[259795]:         }
Jan 31 08:41:38 compute-0 epic_carson[259795]:     ],
Jan 31 08:41:38 compute-0 epic_carson[259795]:     "1": [
Jan 31 08:41:38 compute-0 epic_carson[259795]:         {
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "devices": [
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "/dev/loop4"
Jan 31 08:41:38 compute-0 epic_carson[259795]:             ],
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_name": "ceph_lv1",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_size": "21470642176",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "name": "ceph_lv1",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "tags": {
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cluster_name": "ceph",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.crush_device_class": "",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.encrypted": "0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.objectstore": "bluestore",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osd_id": "1",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.type": "block",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.vdo": "0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.with_tpm": "0"
Jan 31 08:41:38 compute-0 epic_carson[259795]:             },
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "type": "block",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "vg_name": "ceph_vg1"
Jan 31 08:41:38 compute-0 epic_carson[259795]:         }
Jan 31 08:41:38 compute-0 epic_carson[259795]:     ],
Jan 31 08:41:38 compute-0 epic_carson[259795]:     "2": [
Jan 31 08:41:38 compute-0 epic_carson[259795]:         {
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "devices": [
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "/dev/loop5"
Jan 31 08:41:38 compute-0 epic_carson[259795]:             ],
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_name": "ceph_lv2",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_size": "21470642176",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "name": "ceph_lv2",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "tags": {
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.cluster_name": "ceph",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.crush_device_class": "",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.encrypted": "0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.objectstore": "bluestore",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osd_id": "2",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.type": "block",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.vdo": "0",
Jan 31 08:41:38 compute-0 epic_carson[259795]:                 "ceph.with_tpm": "0"
Jan 31 08:41:38 compute-0 epic_carson[259795]:             },
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "type": "block",
Jan 31 08:41:38 compute-0 epic_carson[259795]:             "vg_name": "ceph_vg2"
Jan 31 08:41:38 compute-0 epic_carson[259795]:         }
Jan 31 08:41:38 compute-0 epic_carson[259795]:     ]
Jan 31 08:41:38 compute-0 epic_carson[259795]: }
Jan 31 08:41:38 compute-0 systemd[1]: libpod-abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700.scope: Deactivated successfully.
Jan 31 08:41:38 compute-0 podman[259779]: 2026-01-31 08:41:38.526750721 +0000 UTC m=+0.553574889 container died abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_carson, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:41:38 compute-0 sudo[259804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Jan 31 08:41:38 compute-0 sudo[259804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:38 compute-0 sudo[259804]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:38 compute-0 sudo[259839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Jan 31 08:41:38 compute-0 sudo[259839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:41:38 compute-0 sudo[259839]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ecac5297bf34561c352244c781a1437cbc97841ffcd0c195c5b4b335c3fd66c-merged.mount: Deactivated successfully.
Jan 31 08:41:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:41:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136846036' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:41:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:41:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136846036' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:41:39 compute-0 podman[259779]: 2026-01-31 08:41:39.659636054 +0000 UTC m=+1.686460222 container remove abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_carson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:41:39 compute-0 systemd[1]: libpod-conmon-abc73cbb131b883142eae5f81d01a6496c00f1db77b5567fc30dccc5fc925700.scope: Deactivated successfully.
Jan 31 08:41:39 compute-0 sudo[259703]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:39 compute-0 sudo[259865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:39 compute-0 sudo[259865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:39 compute-0 sudo[259865]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:39 compute-0 sudo[259890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:41:39 compute-0 sudo[259890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:39 compute-0 ceph-mon[75294]: pgmap v1485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/4136846036' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:41:39 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/4136846036' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:41:40 compute-0 podman[259927]: 2026-01-31 08:41:40.024607488 +0000 UTC m=+0.019820552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:40 compute-0 podman[259927]: 2026-01-31 08:41:40.127195751 +0000 UTC m=+0.122408895 container create cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:41:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:40 compute-0 systemd[1]: Started libpod-conmon-cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb.scope.
Jan 31 08:41:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:40 compute-0 podman[259927]: 2026-01-31 08:41:40.5395935 +0000 UTC m=+0.534806564 container init cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:41:40 compute-0 podman[259927]: 2026-01-31 08:41:40.544665846 +0000 UTC m=+0.539878890 container start cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shtern, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 08:41:40 compute-0 affectionate_shtern[259944]: 167 167
Jan 31 08:41:40 compute-0 systemd[1]: libpod-cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb.scope: Deactivated successfully.
Jan 31 08:41:40 compute-0 podman[259927]: 2026-01-31 08:41:40.632518094 +0000 UTC m=+0.627731138 container attach cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:41:40 compute-0 podman[259927]: 2026-01-31 08:41:40.632955894 +0000 UTC m=+0.628168948 container died cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-31e44470861107a9ffcc939253b0ef713dc61dc8365928249183745fe9623194-merged.mount: Deactivated successfully.
Jan 31 08:41:41 compute-0 ceph-mon[75294]: pgmap v1486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:41 compute-0 podman[259927]: 2026-01-31 08:41:41.322269545 +0000 UTC m=+1.317482589 container remove cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shtern, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:41:41 compute-0 systemd[1]: libpod-conmon-cf88e9a2ada6a081eec7a24be5e8e400abf3cc106daf8dcc4fd6dc56de80e5cb.scope: Deactivated successfully.
Jan 31 08:41:41 compute-0 podman[259967]: 2026-01-31 08:41:41.467853523 +0000 UTC m=+0.062689515 container create 8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rubin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:41:41 compute-0 podman[259967]: 2026-01-31 08:41:41.425018582 +0000 UTC m=+0.019854594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:41 compute-0 systemd[1]: Started libpod-conmon-8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63.scope.
Jan 31 08:41:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc9c8a74ca388de6d8a5cfe1abe970ab635f3a5d9b54dbfceae47289d4d97b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc9c8a74ca388de6d8a5cfe1abe970ab635f3a5d9b54dbfceae47289d4d97b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc9c8a74ca388de6d8a5cfe1abe970ab635f3a5d9b54dbfceae47289d4d97b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cc9c8a74ca388de6d8a5cfe1abe970ab635f3a5d9b54dbfceae47289d4d97b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 podman[259967]: 2026-01-31 08:41:41.588937563 +0000 UTC m=+0.183773585 container init 8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rubin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:41:41 compute-0 podman[259967]: 2026-01-31 08:41:41.597765142 +0000 UTC m=+0.192601134 container start 8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:41:41 compute-0 podman[259967]: 2026-01-31 08:41:41.61343569 +0000 UTC m=+0.208271712 container attach 8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:41:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:42 compute-0 lvm[260060]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:41:42 compute-0 lvm[260060]: VG ceph_vg0 finished
Jan 31 08:41:42 compute-0 lvm[260063]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:41:42 compute-0 lvm[260063]: VG ceph_vg1 finished
Jan 31 08:41:42 compute-0 lvm[260065]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:41:42 compute-0 lvm[260065]: VG ceph_vg2 finished
Jan 31 08:41:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:42 compute-0 distracted_rubin[259984]: {}
Jan 31 08:41:42 compute-0 systemd[1]: libpod-8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63.scope: Deactivated successfully.
Jan 31 08:41:42 compute-0 systemd[1]: libpod-8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63.scope: Consumed 1.163s CPU time.
Jan 31 08:41:42 compute-0 podman[259967]: 2026-01-31 08:41:42.402333479 +0000 UTC m=+0.997169491 container died 8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-11cc9c8a74ca388de6d8a5cfe1abe970ab635f3a5d9b54dbfceae47289d4d97b-merged.mount: Deactivated successfully.
Jan 31 08:41:43 compute-0 podman[259967]: 2026-01-31 08:41:43.125200752 +0000 UTC m=+1.720036744 container remove 8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rubin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:41:43 compute-0 sudo[259890]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:41:43 compute-0 systemd[1]: libpod-conmon-8b5139bc5251faf5dce4d1bd62d3e1011942a6f427040dac66984d938fda8d63.scope: Deactivated successfully.
Jan 31 08:41:43 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:41:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:41:43 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:41:43 compute-0 sudo[260080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:41:43 compute-0 sudo[260080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:43 compute-0 sudo[260080]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:43 compute-0 ceph-mon[75294]: pgmap v1487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:41:43 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:41:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:45 compute-0 ceph-mon[75294]: pgmap v1488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:41:46.985 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:41:46.986 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:41:46.986 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:47 compute-0 ceph-mon[75294]: pgmap v1489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:49 compute-0 ceph-mon[75294]: pgmap v1490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:41:50
Jan 31 08:41:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:41:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:41:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta', 'volumes']
Jan 31 08:41:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:41:51 compute-0 ceph-mon[75294]: pgmap v1491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:53 compute-0 ceph-mon[75294]: pgmap v1492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:55 compute-0 podman[260105]: 2026-01-31 08:41:55.184451673 +0000 UTC m=+0.050888028 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:41:55 compute-0 ceph-mon[75294]: pgmap v1493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:41:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:41:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:57 compute-0 ceph-mon[75294]: pgmap v1494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:59 compute-0 ceph-mon[75294]: pgmap v1495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:00 compute-0 sshd-session[258910]: Connection closed by 192.168.122.30 port 47178
Jan 31 08:42:00 compute-0 sshd-session[258907]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:42:00 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 31 08:42:00 compute-0 systemd-logind[810]: Session 53 logged out. Waiting for processes to exit.
Jan 31 08:42:00 compute-0 systemd-logind[810]: Removed session 53.
Jan 31 08:42:00 compute-0 sshd-session[259109]: Connection closed by 192.168.122.30 port 50426
Jan 31 08:42:00 compute-0 sshd-session[259106]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:42:00 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 08:42:00 compute-0 systemd-logind[810]: Session 54 logged out. Waiting for processes to exit.
Jan 31 08:42:00 compute-0 systemd-logind[810]: Removed session 54.
Jan 31 08:42:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:01 compute-0 sshd-session[259339]: Connection closed by 192.168.122.30 port 50434
Jan 31 08:42:01 compute-0 sshd-session[259336]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:42:01 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 31 08:42:01 compute-0 systemd-logind[810]: Session 55 logged out. Waiting for processes to exit.
Jan 31 08:42:01 compute-0 systemd-logind[810]: Removed session 55.
Jan 31 08:42:01 compute-0 podman[260124]: 2026-01-31 08:42:01.438694144 +0000 UTC m=+0.070466768 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:42:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:01 compute-0 ceph-mon[75294]: pgmap v1496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:03 compute-0 ceph-mon[75294]: pgmap v1497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:03 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 31 08:42:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:03.559204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:42:03 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 31 08:42:03 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848923559240, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1508, "num_deletes": 507, "total_data_size": 1945800, "memory_usage": 1977072, "flush_reason": "Manual Compaction"}
Jan 31 08:42:03 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 31 08:42:04 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848924119390, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1915624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28892, "largest_seqno": 30399, "table_properties": {"data_size": 1908983, "index_size": 3331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16675, "raw_average_key_size": 18, "raw_value_size": 1893713, "raw_average_value_size": 2130, "num_data_blocks": 149, "num_entries": 889, "num_filter_entries": 889, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848796, "oldest_key_time": 1769848796, "file_creation_time": 1769848923, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:42:04 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 560274 microseconds, and 3678 cpu microseconds.
Jan 31 08:42:04 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:42:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:04.119467) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1915624 bytes OK
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:04.119499) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:04.950906) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:04.950954) EVENT_LOG_v1 {"time_micros": 1769848924950944, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:04.950979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1938097, prev total WAL file size 1951565, number of live WAL files 2.
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:05.036102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1870KB)], [65(7680KB)]
Jan 31 08:42:05 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848925036142, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9780012, "oldest_snapshot_seqno": -1}
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5086 keys, 7924881 bytes, temperature: kUnknown
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848926219686, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7924881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7891454, "index_size": 19621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 128530, "raw_average_key_size": 25, "raw_value_size": 7799947, "raw_average_value_size": 1533, "num_data_blocks": 805, "num_entries": 5086, "num_filter_entries": 5086, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769848925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:42:06 compute-0 ceph-mon[75294]: pgmap v1498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.219888) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7924881 bytes
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.499745) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.3 rd, 6.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.5 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 6113, records dropped: 1027 output_compression: NoCompression
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.499773) EVENT_LOG_v1 {"time_micros": 1769848926499761, "job": 36, "event": "compaction_finished", "compaction_time_micros": 1183605, "compaction_time_cpu_micros": 13916, "output_level": 6, "num_output_files": 1, "total_output_size": 7924881, "num_input_records": 6113, "num_output_records": 5086, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848926500330, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848926501441, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:05.036012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.501472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.501477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.501478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.501480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:06 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:42:06.501482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:42:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:07 compute-0 ceph-mon[75294]: pgmap v1499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:09 compute-0 ceph-mon[75294]: pgmap v1500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:10 compute-0 ceph-mon[75294]: pgmap v1501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:13 compute-0 ceph-mon[75294]: pgmap v1502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.520 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.521 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.521 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.521 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.756 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.756 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.756 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.757 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:42:15 compute-0 nova_compute[240062]: 2026-01-31 08:42:15.757 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:15 compute-0 ceph-mon[75294]: pgmap v1503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:42:16 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051135338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.313 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.437 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.438 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5098MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.438 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.439 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:16 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2051135338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.918 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.919 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:42:16 compute-0 nova_compute[240062]: 2026-01-31 08:42:16.975 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing inventories for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.042 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating ProviderTree inventory for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.042 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.065 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing aggregate associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.083 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing trait associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.098 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:42:17 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110540175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.647 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.652 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.778 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.779 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:42:17 compute-0 nova_compute[240062]: 2026-01-31 08:42:17.780 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:18 compute-0 ceph-mon[75294]: pgmap v1504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:18 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/110540175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:18 compute-0 nova_compute[240062]: 2026-01-31 08:42:18.414 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:18 compute-0 nova_compute[240062]: 2026-01-31 08:42:18.414 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:42:18 compute-0 nova_compute[240062]: 2026-01-31 08:42:18.415 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:42:18 compute-0 nova_compute[240062]: 2026-01-31 08:42:18.515 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:42:18 compute-0 nova_compute[240062]: 2026-01-31 08:42:18.515 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:19 compute-0 nova_compute[240062]: 2026-01-31 08:42:19.251 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:19 compute-0 ceph-mon[75294]: pgmap v1505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:20 compute-0 nova_compute[240062]: 2026-01-31 08:42:20.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:21 compute-0 ceph-mon[75294]: pgmap v1506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:22 compute-0 nova_compute[240062]: 2026-01-31 08:42:22.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:23 compute-0 nova_compute[240062]: 2026-01-31 08:42:23.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:23 compute-0 ceph-mon[75294]: pgmap v1507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:25 compute-0 ceph-mon[75294]: pgmap v1508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:26 compute-0 podman[260194]: 2026-01-31 08:42:26.175503494 +0000 UTC m=+0.047440186 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:42:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:27 compute-0 ceph-mon[75294]: pgmap v1509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:29 compute-0 ceph-mon[75294]: pgmap v1510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:30 compute-0 nova_compute[240062]: 2026-01-31 08:42:30.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:31 compute-0 ceph-mon[75294]: pgmap v1511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:32 compute-0 podman[260214]: 2026-01-31 08:42:32.223510284 +0000 UTC m=+0.099683165 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:42:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:33 compute-0 ceph-mon[75294]: pgmap v1512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:35 compute-0 ceph-mon[75294]: pgmap v1513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:36 compute-0 ceph-mon[75294]: pgmap v1514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:39 compute-0 ceph-mon[75294]: pgmap v1515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:41 compute-0 ceph-mon[75294]: pgmap v1516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:43 compute-0 sudo[260240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:43 compute-0 sudo[260240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:43 compute-0 sudo[260240]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:43 compute-0 sudo[260265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:42:43 compute-0 sudo[260265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:44 compute-0 ceph-mon[75294]: pgmap v1517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:44 compute-0 podman[260332]: 2026-01-31 08:42:44.429391156 +0000 UTC m=+0.686010541 container exec 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:42:44 compute-0 podman[260352]: 2026-01-31 08:42:44.627958852 +0000 UTC m=+0.089443452 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:42:44 compute-0 podman[260332]: 2026-01-31 08:42:44.809969773 +0000 UTC m=+1.066589158 container exec_died 46fb178204c191926c8663457e64c2511ed04ccfd9a74581d5df1667ad75e0f0 (image=quay.io/ceph/ceph:v20, name=ceph-dc03f344-536f-5591-add9-31059f42637c-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:42:45 compute-0 ceph-mon[75294]: pgmap v1518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:45 compute-0 sudo[260265]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:45 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:46 compute-0 sudo[260518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:46 compute-0 sudo[260518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:46 compute-0 sudo[260518]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:46 compute-0 sudo[260543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:42:46 compute-0 sudo[260543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:46 compute-0 sudo[260543]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:42:46 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:46 compute-0 sudo[260600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:46 compute-0 sudo[260600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:46 compute-0 sudo[260600]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:46 compute-0 sudo[260625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:42:46 compute-0 sudo[260625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:42:46.985 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:42:46.986 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:42:46.987 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:47 compute-0 podman[260661]: 2026-01-31 08:42:47.134843257 +0000 UTC m=+0.019659444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:47 compute-0 ceph-mon[75294]: pgmap v1519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:47 compute-0 podman[260661]: 2026-01-31 08:42:47.337768934 +0000 UTC m=+0.222585111 container create 604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_banach, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:42:47 compute-0 systemd[1]: Started libpod-conmon-604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025.scope.
Jan 31 08:42:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:47 compute-0 podman[260661]: 2026-01-31 08:42:47.536908476 +0000 UTC m=+0.421724673 container init 604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:42:47 compute-0 podman[260661]: 2026-01-31 08:42:47.542809667 +0000 UTC m=+0.427625844 container start 604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_banach, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:47 compute-0 confident_banach[260678]: 167 167
Jan 31 08:42:47 compute-0 systemd[1]: libpod-604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025.scope: Deactivated successfully.
Jan 31 08:42:47 compute-0 podman[260661]: 2026-01-31 08:42:47.644518511 +0000 UTC m=+0.529334718 container attach 604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_banach, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:42:47 compute-0 podman[260661]: 2026-01-31 08:42:47.645174958 +0000 UTC m=+0.529991145 container died 604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_banach, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef65b147c5ecb514776ec3e02f433705816ed2da6d256cf7b661e32a02372d72-merged.mount: Deactivated successfully.
Jan 31 08:42:47 compute-0 podman[260661]: 2026-01-31 08:42:47.964468486 +0000 UTC m=+0.849284653 container remove 604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_banach, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:42:47 compute-0 systemd[1]: libpod-conmon-604b6c90ccdf5b09d1b71fbc3c65eda335015c611c33d1cbd751707a8373a025.scope: Deactivated successfully.
Jan 31 08:42:48 compute-0 podman[260702]: 2026-01-31 08:42:48.120235935 +0000 UTC m=+0.060317516 container create 2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:42:48 compute-0 systemd[1]: Started libpod-conmon-2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732.scope.
Jan 31 08:42:48 compute-0 podman[260702]: 2026-01-31 08:42:48.08490208 +0000 UTC m=+0.024983681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6479c9daeb75d3fb5b21be8726ed73dfaf0b3e01e4830c8f92e4b1e8227a2cbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6479c9daeb75d3fb5b21be8726ed73dfaf0b3e01e4830c8f92e4b1e8227a2cbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6479c9daeb75d3fb5b21be8726ed73dfaf0b3e01e4830c8f92e4b1e8227a2cbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6479c9daeb75d3fb5b21be8726ed73dfaf0b3e01e4830c8f92e4b1e8227a2cbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6479c9daeb75d3fb5b21be8726ed73dfaf0b3e01e4830c8f92e4b1e8227a2cbf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:48 compute-0 podman[260702]: 2026-01-31 08:42:48.236181125 +0000 UTC m=+0.176262736 container init 2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:42:48 compute-0 podman[260702]: 2026-01-31 08:42:48.243482321 +0000 UTC m=+0.183563902 container start 2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:42:48 compute-0 podman[260702]: 2026-01-31 08:42:48.257976013 +0000 UTC m=+0.198057614 container attach 2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:42:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:48 compute-0 infallible_margulis[260718]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:42:48 compute-0 infallible_margulis[260718]: --> All data devices are unavailable
Jan 31 08:42:48 compute-0 systemd[1]: libpod-2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732.scope: Deactivated successfully.
Jan 31 08:42:48 compute-0 podman[260702]: 2026-01-31 08:42:48.670107468 +0000 UTC m=+0.610189039 container died 2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6479c9daeb75d3fb5b21be8726ed73dfaf0b3e01e4830c8f92e4b1e8227a2cbf-merged.mount: Deactivated successfully.
Jan 31 08:42:49 compute-0 podman[260702]: 2026-01-31 08:42:49.100501301 +0000 UTC m=+1.040582882 container remove 2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_margulis, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:49 compute-0 sudo[260625]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:49 compute-0 systemd[1]: libpod-conmon-2e4587a05022d80932e0f4bfd015a17db94326a7a2c30b104da697444776c732.scope: Deactivated successfully.
Jan 31 08:42:49 compute-0 sudo[260751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:49 compute-0 sudo[260751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:49 compute-0 sudo[260751]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:49 compute-0 sudo[260776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:42:49 compute-0 sudo[260776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:49 compute-0 ceph-mon[75294]: pgmap v1520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:49 compute-0 podman[260813]: 2026-01-31 08:42:49.528138235 +0000 UTC m=+0.043232689 container create 552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:42:49 compute-0 systemd[1]: Started libpod-conmon-552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd.scope.
Jan 31 08:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:49 compute-0 podman[260813]: 2026-01-31 08:42:49.506544791 +0000 UTC m=+0.021639265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:49 compute-0 podman[260813]: 2026-01-31 08:42:49.615552253 +0000 UTC m=+0.130646727 container init 552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:42:49 compute-0 podman[260813]: 2026-01-31 08:42:49.62204787 +0000 UTC m=+0.137142314 container start 552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_volhard, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:42:49 compute-0 epic_volhard[260830]: 167 167
Jan 31 08:42:49 compute-0 systemd[1]: libpod-552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd.scope: Deactivated successfully.
Jan 31 08:42:49 compute-0 conmon[260830]: conmon 552e5d86b1b420f1b4ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd.scope/container/memory.events
Jan 31 08:42:49 compute-0 podman[260813]: 2026-01-31 08:42:49.635574645 +0000 UTC m=+0.150669089 container attach 552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_volhard, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:42:49 compute-0 podman[260813]: 2026-01-31 08:42:49.635960586 +0000 UTC m=+0.151055030 container died 552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_volhard, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-25adb75cd584e94684c8e692cd3833b08f5d80c7240a55fd5b263581ba1e44b1-merged.mount: Deactivated successfully.
Jan 31 08:42:49 compute-0 podman[260813]: 2026-01-31 08:42:49.728870075 +0000 UTC m=+0.243964519 container remove 552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:42:49 compute-0 systemd[1]: libpod-conmon-552e5d86b1b420f1b4ec411c6fd3317f8def341939ba4e56fffce21c91ba11dd.scope: Deactivated successfully.
Jan 31 08:42:49 compute-0 podman[260856]: 2026-01-31 08:42:49.856700119 +0000 UTC m=+0.039842771 container create 72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:42:49 compute-0 systemd[1]: Started libpod-conmon-72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9.scope.
Jan 31 08:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aad9e9e30df3a2838de07169df889d4970252d7ce6c932b7af197dadb9646/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aad9e9e30df3a2838de07169df889d4970252d7ce6c932b7af197dadb9646/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aad9e9e30df3a2838de07169df889d4970252d7ce6c932b7af197dadb9646/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aad9e9e30df3a2838de07169df889d4970252d7ce6c932b7af197dadb9646/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 podman[260856]: 2026-01-31 08:42:49.835335412 +0000 UTC m=+0.018478084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:49 compute-0 podman[260856]: 2026-01-31 08:42:49.940146177 +0000 UTC m=+0.123288859 container init 72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:42:49 compute-0 podman[260856]: 2026-01-31 08:42:49.945947925 +0000 UTC m=+0.129090577 container start 72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:42:49 compute-0 podman[260856]: 2026-01-31 08:42:49.95551287 +0000 UTC m=+0.138655542 container attach 72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:42:50 compute-0 lucid_darwin[260872]: {
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:     "0": [
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:         {
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "devices": [
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "/dev/loop3"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             ],
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_name": "ceph_lv0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_size": "21470642176",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "name": "ceph_lv0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "tags": {
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cluster_name": "ceph",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.crush_device_class": "",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.encrypted": "0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.objectstore": "bluestore",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osd_id": "0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.type": "block",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.vdo": "0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.with_tpm": "0"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             },
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "type": "block",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "vg_name": "ceph_vg0"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:         }
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:     ],
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:     "1": [
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:         {
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "devices": [
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "/dev/loop4"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             ],
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_name": "ceph_lv1",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_size": "21470642176",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "name": "ceph_lv1",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "tags": {
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cluster_name": "ceph",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.crush_device_class": "",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.encrypted": "0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.objectstore": "bluestore",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osd_id": "1",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.type": "block",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.vdo": "0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.with_tpm": "0"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             },
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "type": "block",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "vg_name": "ceph_vg1"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:         }
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:     ],
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:     "2": [
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:         {
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "devices": [
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "/dev/loop5"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             ],
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_name": "ceph_lv2",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_size": "21470642176",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "name": "ceph_lv2",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "tags": {
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.cluster_name": "ceph",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.crush_device_class": "",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.encrypted": "0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.objectstore": "bluestore",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osd_id": "2",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.type": "block",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.vdo": "0",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:                 "ceph.with_tpm": "0"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             },
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "type": "block",
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:             "vg_name": "ceph_vg2"
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:         }
Jan 31 08:42:50 compute-0 lucid_darwin[260872]:     ]
Jan 31 08:42:50 compute-0 lucid_darwin[260872]: }
Jan 31 08:42:50 compute-0 systemd[1]: libpod-72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9.scope: Deactivated successfully.
Jan 31 08:42:50 compute-0 podman[260856]: 2026-01-31 08:42:50.227915367 +0000 UTC m=+0.411058049 container died 72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa9aad9e9e30df3a2838de07169df889d4970252d7ce6c932b7af197dadb9646-merged.mount: Deactivated successfully.
Jan 31 08:42:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:50 compute-0 podman[260856]: 2026-01-31 08:42:50.349339326 +0000 UTC m=+0.532481978 container remove 72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:42:50 compute-0 systemd[1]: libpod-conmon-72b52635ee4e15ff2e1f6382c9b6f94c9b7151f284bf6e1f580e36ac2da5c0a9.scope: Deactivated successfully.
Jan 31 08:42:50 compute-0 sudo[260776]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:50 compute-0 sudo[260894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:50 compute-0 sudo[260894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:50 compute-0 sudo[260894]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:50 compute-0 sudo[260919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:42:50 compute-0 sudo[260919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:50 compute-0 podman[260957]: 2026-01-31 08:42:50.845871714 +0000 UTC m=+0.044502681 container create ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:42:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:42:50
Jan 31 08:42:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:42:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:42:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', 'images', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms']
Jan 31 08:42:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:42:50 compute-0 systemd[1]: Started libpod-conmon-ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b.scope.
Jan 31 08:42:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:50 compute-0 podman[260957]: 2026-01-31 08:42:50.825835261 +0000 UTC m=+0.024466258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:50 compute-0 podman[260957]: 2026-01-31 08:42:50.933368294 +0000 UTC m=+0.131999291 container init ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:42:50 compute-0 podman[260957]: 2026-01-31 08:42:50.941006681 +0000 UTC m=+0.139637648 container start ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:42:50 compute-0 systemd[1]: libpod-ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b.scope: Deactivated successfully.
Jan 31 08:42:50 compute-0 brave_pascal[260974]: 167 167
Jan 31 08:42:50 compute-0 conmon[260974]: conmon ff95f85ad3a8e0345b43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b.scope/container/memory.events
Jan 31 08:42:50 compute-0 podman[260957]: 2026-01-31 08:42:50.955015979 +0000 UTC m=+0.153646976 container attach ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:42:50 compute-0 podman[260957]: 2026-01-31 08:42:50.957107042 +0000 UTC m=+0.155738009 container died ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-766e4e01af6c01b65dc812bcc903497f80b7f972e0b68718e7d2b55d3714c187-merged.mount: Deactivated successfully.
Jan 31 08:42:51 compute-0 podman[260957]: 2026-01-31 08:42:51.051743217 +0000 UTC m=+0.250374184 container remove ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:42:51 compute-0 systemd[1]: libpod-conmon-ff95f85ad3a8e0345b439e77bb907de49a048296285eb4230151844b2ba3d52b.scope: Deactivated successfully.
Jan 31 08:42:51 compute-0 podman[260998]: 2026-01-31 08:42:51.200615399 +0000 UTC m=+0.055718918 container create aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:51 compute-0 systemd[1]: Started libpod-conmon-aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de.scope.
Jan 31 08:42:51 compute-0 podman[260998]: 2026-01-31 08:42:51.170715134 +0000 UTC m=+0.025818683 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130297e8d67948b3ad347297828ef1c005514835ed2cac19c4d459b9fc9f90a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130297e8d67948b3ad347297828ef1c005514835ed2cac19c4d459b9fc9f90a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130297e8d67948b3ad347297828ef1c005514835ed2cac19c4d459b9fc9f90a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130297e8d67948b3ad347297828ef1c005514835ed2cac19c4d459b9fc9f90a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:51 compute-0 podman[260998]: 2026-01-31 08:42:51.294527654 +0000 UTC m=+0.149631203 container init aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:42:51 compute-0 podman[260998]: 2026-01-31 08:42:51.299702787 +0000 UTC m=+0.154806306 container start aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 31 08:42:51 compute-0 podman[260998]: 2026-01-31 08:42:51.312171227 +0000 UTC m=+0.167274776 container attach aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:42:51 compute-0 ceph-mon[75294]: pgmap v1521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:51 compute-0 lvm[261093]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:42:51 compute-0 lvm[261093]: VG ceph_vg0 finished
Jan 31 08:42:51 compute-0 lvm[261094]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:42:51 compute-0 lvm[261094]: VG ceph_vg1 finished
Jan 31 08:42:52 compute-0 lvm[261096]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:42:52 compute-0 lvm[261096]: VG ceph_vg2 finished
Jan 31 08:42:52 compute-0 goofy_lehmann[261014]: {}
Jan 31 08:42:52 compute-0 systemd[1]: libpod-aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de.scope: Deactivated successfully.
Jan 31 08:42:52 compute-0 systemd[1]: libpod-aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de.scope: Consumed 1.207s CPU time.
Jan 31 08:42:52 compute-0 podman[260998]: 2026-01-31 08:42:52.148091516 +0000 UTC m=+1.003195035 container died aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-130297e8d67948b3ad347297828ef1c005514835ed2cac19c4d459b9fc9f90a6-merged.mount: Deactivated successfully.
Jan 31 08:42:52 compute-0 podman[260998]: 2026-01-31 08:42:52.24352329 +0000 UTC m=+1.098626809 container remove aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:52 compute-0 systemd[1]: libpod-conmon-aaed85668dcc0a78c48d9d4eca79d7e80c8a4032142be35cbd81439505eeb8de.scope: Deactivated successfully.
Jan 31 08:42:52 compute-0 sudo[260919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:42:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:42:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:52 compute-0 sudo[261111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:42:52 compute-0 sudo[261111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:52 compute-0 sudo[261111]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:53 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:42:53 compute-0 ceph-mon[75294]: pgmap v1522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:55 compute-0 ceph-mon[75294]: pgmap v1523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:42:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:42:56 compute-0 sshd-session[261136]: Invalid user sol from 80.94.92.182 port 42016
Jan 31 08:42:56 compute-0 podman[261138]: 2026-01-31 08:42:56.29799217 +0000 UTC m=+0.085485130 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 08:42:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:56 compute-0 sshd-session[261136]: Connection closed by invalid user sol 80.94.92.182 port 42016 [preauth]
Jan 31 08:42:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:57 compute-0 ceph-mon[75294]: pgmap v1524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:59 compute-0 ceph-mon[75294]: pgmap v1525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:01 compute-0 ceph-mon[75294]: pgmap v1526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:03 compute-0 podman[261157]: 2026-01-31 08:43:03.207739802 +0000 UTC m=+0.077947557 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 08:43:03 compute-0 ceph-mon[75294]: pgmap v1527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:05 compute-0 ceph-mon[75294]: pgmap v1528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:43:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:07 compute-0 ceph-mon[75294]: pgmap v1529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:09 compute-0 ceph-mon[75294]: pgmap v1530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:11 compute-0 ceph-mon[75294]: pgmap v1531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:13 compute-0 ceph-mon[75294]: pgmap v1532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:14 compute-0 nova_compute[240062]: 2026-01-31 08:43:14.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:15 compute-0 nova_compute[240062]: 2026-01-31 08:43:15.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:15 compute-0 nova_compute[240062]: 2026-01-31 08:43:15.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:43:15 compute-0 nova_compute[240062]: 2026-01-31 08:43:15.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:15 compute-0 ceph-mon[75294]: pgmap v1533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:16 compute-0 nova_compute[240062]: 2026-01-31 08:43:16.567 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:16 compute-0 nova_compute[240062]: 2026-01-31 08:43:16.567 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:16 compute-0 nova_compute[240062]: 2026-01-31 08:43:16.568 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:16 compute-0 nova_compute[240062]: 2026-01-31 08:43:16.568 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:43:16 compute-0 nova_compute[240062]: 2026-01-31 08:43:16.568 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:16 compute-0 ceph-mon[75294]: pgmap v1534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:43:17 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146285826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:17 compute-0 nova_compute[240062]: 2026-01-31 08:43:17.159 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:17 compute-0 nova_compute[240062]: 2026-01-31 08:43:17.378 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:43:17 compute-0 nova_compute[240062]: 2026-01-31 08:43:17.379 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5102MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:43:17 compute-0 nova_compute[240062]: 2026-01-31 08:43:17.379 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:17 compute-0 nova_compute[240062]: 2026-01-31 08:43:17.380 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:18 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2146285826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:18 compute-0 nova_compute[240062]: 2026-01-31 08:43:18.947 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:43:18 compute-0 nova_compute[240062]: 2026-01-31 08:43:18.947 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:43:18 compute-0 nova_compute[240062]: 2026-01-31 08:43:18.977 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:19 compute-0 ceph-mon[75294]: pgmap v1535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:43:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934044548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:19 compute-0 nova_compute[240062]: 2026-01-31 08:43:19.579 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:19 compute-0 nova_compute[240062]: 2026-01-31 08:43:19.585 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:43:20 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3934044548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:21 compute-0 ceph-mon[75294]: pgmap v1536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:22 compute-0 nova_compute[240062]: 2026-01-31 08:43:22.861 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:43:22 compute-0 nova_compute[240062]: 2026-01-31 08:43:22.863 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:43:22 compute-0 nova_compute[240062]: 2026-01-31 08:43:22.863 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:23 compute-0 ceph-mon[75294]: pgmap v1537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:25 compute-0 ceph-mon[75294]: pgmap v1538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:25 compute-0 nova_compute[240062]: 2026-01-31 08:43:25.865 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:25 compute-0 nova_compute[240062]: 2026-01-31 08:43:25.865 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:25 compute-0 nova_compute[240062]: 2026-01-31 08:43:25.865 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:43:25 compute-0 nova_compute[240062]: 2026-01-31 08:43:25.865 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:43:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:26 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:27 compute-0 ceph-mon[75294]: pgmap v1539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:27 compute-0 podman[261228]: 2026-01-31 08:43:27.17838166 +0000 UTC m=+0.049257242 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:43:28 compute-0 nova_compute[240062]: 2026-01-31 08:43:28.123 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:43:28 compute-0 nova_compute[240062]: 2026-01-31 08:43:28.123 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:28 compute-0 nova_compute[240062]: 2026-01-31 08:43:28.123 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:28 compute-0 nova_compute[240062]: 2026-01-31 08:43:28.124 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:28 compute-0 nova_compute[240062]: 2026-01-31 08:43:28.124 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:29 compute-0 ceph-mon[75294]: pgmap v1540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:31 compute-0 ceph-mon[75294]: pgmap v1541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:31 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:33 compute-0 ceph-mon[75294]: pgmap v1542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:34 compute-0 podman[261247]: 2026-01-31 08:43:34.245281657 +0000 UTC m=+0.117917771 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 08:43:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:35 compute-0 ceph-mon[75294]: pgmap v1543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:36 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:37 compute-0 ceph-mon[75294]: pgmap v1544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:39 compute-0 ceph-mon[75294]: pgmap v1545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:43:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4183727876' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:43:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:43:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4183727876' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:43:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/4183727876' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:43:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/4183727876' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:43:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:41 compute-0 ceph-mon[75294]: pgmap v1546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:41 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:43 compute-0 ceph-mon[75294]: pgmap v1547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:45 compute-0 ceph-mon[75294]: pgmap v1548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:46 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:43:46.987 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:43:46.988 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:43:46.988 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:47 compute-0 ceph-mon[75294]: pgmap v1549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:50 compute-0 ceph-mon[75294]: pgmap v1550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:43:50
Jan 31 08:43:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:43:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:43:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms']
Jan 31 08:43:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:43:51 compute-0 ceph-mon[75294]: pgmap v1551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:51 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:52 compute-0 sudo[261274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:52 compute-0 sudo[261274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:52 compute-0 sudo[261274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:52 compute-0 sudo[261299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:43:52 compute-0 sudo[261299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:52 compute-0 sudo[261299]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:43:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:43:52 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:43:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:43:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:43:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:43:53 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:43:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:43:53 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:43:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:43:53 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:53 compute-0 sudo[261354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:53 compute-0 sudo[261354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:53 compute-0 sudo[261354]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:53 compute-0 sudo[261379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:43:53 compute-0 sudo[261379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:53 compute-0 podman[261415]: 2026-01-31 08:43:53.533799576 +0000 UTC m=+0.019303885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:43:53 compute-0 podman[261415]: 2026-01-31 08:43:53.967802061 +0000 UTC m=+0.453306360 container create cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:43:54 compute-0 ceph-mon[75294]: pgmap v1552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:43:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:43:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:43:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:43:54 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:54 compute-0 systemd[1]: Started libpod-conmon-cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54.scope.
Jan 31 08:43:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:54 compute-0 podman[261415]: 2026-01-31 08:43:54.886237584 +0000 UTC m=+1.371741913 container init cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_montalcini, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:43:54 compute-0 podman[261415]: 2026-01-31 08:43:54.893309215 +0000 UTC m=+1.378813524 container start cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:43:54 compute-0 amazing_montalcini[261431]: 167 167
Jan 31 08:43:54 compute-0 systemd[1]: libpod-cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54.scope: Deactivated successfully.
Jan 31 08:43:55 compute-0 podman[261415]: 2026-01-31 08:43:55.271924803 +0000 UTC m=+1.757429122 container attach cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:43:55 compute-0 podman[261415]: 2026-01-31 08:43:55.273125764 +0000 UTC m=+1.758630063 container died cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:43:55 compute-0 ceph-mon[75294]: pgmap v1553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:43:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ed83371e1d298cc9cd72e21ec7f6cfb063425d70e455d08012dc90a1ad7416f-merged.mount: Deactivated successfully.
Jan 31 08:43:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:57 compute-0 podman[261415]: 2026-01-31 08:43:57.005081191 +0000 UTC m=+3.490585500 container remove cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_montalcini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:43:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:57 compute-0 systemd[1]: libpod-conmon-cbf075fa1c6acc72d63493879eb4fa993c9cf4b16c03589c7b7c886107f62d54.scope: Deactivated successfully.
Jan 31 08:43:57 compute-0 podman[261455]: 2026-01-31 08:43:57.110075481 +0000 UTC m=+0.022204890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:43:57 compute-0 podman[261455]: 2026-01-31 08:43:57.57349949 +0000 UTC m=+0.485628879 container create 62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_matsumoto, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:43:57 compute-0 ceph-mon[75294]: pgmap v1554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:58 compute-0 systemd[1]: Started libpod-conmon-62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4.scope.
Jan 31 08:43:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d403b66d905547dc620bc93c48f4d559376b17ac50b7f55e2f5cfd2dc331c37a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d403b66d905547dc620bc93c48f4d559376b17ac50b7f55e2f5cfd2dc331c37a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d403b66d905547dc620bc93c48f4d559376b17ac50b7f55e2f5cfd2dc331c37a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d403b66d905547dc620bc93c48f4d559376b17ac50b7f55e2f5cfd2dc331c37a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d403b66d905547dc620bc93c48f4d559376b17ac50b7f55e2f5cfd2dc331c37a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:58 compute-0 podman[261455]: 2026-01-31 08:43:58.281538973 +0000 UTC m=+1.193668382 container init 62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_matsumoto, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:43:58 compute-0 podman[261455]: 2026-01-31 08:43:58.287950378 +0000 UTC m=+1.200079757 container start 62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_matsumoto, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:43:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:58 compute-0 podman[261455]: 2026-01-31 08:43:58.545386291 +0000 UTC m=+1.457515680 container attach 62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:43:58 compute-0 podman[261469]: 2026-01-31 08:43:58.581780684 +0000 UTC m=+0.976631175 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:43:58 compute-0 nifty_matsumoto[261482]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:43:58 compute-0 nifty_matsumoto[261482]: --> All data devices are unavailable
Jan 31 08:43:58 compute-0 systemd[1]: libpod-62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4.scope: Deactivated successfully.
Jan 31 08:43:58 compute-0 podman[261455]: 2026-01-31 08:43:58.702764622 +0000 UTC m=+1.614894021 container died 62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:43:59 compute-0 ceph-mon[75294]: pgmap v1555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d403b66d905547dc620bc93c48f4d559376b17ac50b7f55e2f5cfd2dc331c37a-merged.mount: Deactivated successfully.
Jan 31 08:44:00 compute-0 podman[261455]: 2026-01-31 08:44:00.278291683 +0000 UTC m=+3.190421082 container remove 62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_matsumoto, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:44:00 compute-0 sudo[261379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:00 compute-0 sudo[261522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:00 compute-0 systemd[1]: libpod-conmon-62204c6da6fd38a8e252f5b2495527b23c842fb8c06e3490de0564744fdc89c4.scope: Deactivated successfully.
Jan 31 08:44:00 compute-0 sudo[261522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:00 compute-0 sudo[261522]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:00 compute-0 sudo[261547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:44:00 compute-0 sudo[261547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:00 compute-0 podman[261584]: 2026-01-31 08:44:00.648575358 +0000 UTC m=+0.020210739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:00 compute-0 podman[261584]: 2026-01-31 08:44:00.912501967 +0000 UTC m=+0.284137328 container create 5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chandrasekhar, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:44:01 compute-0 systemd[1]: Started libpod-conmon-5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508.scope.
Jan 31 08:44:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:01 compute-0 podman[261584]: 2026-01-31 08:44:01.47955158 +0000 UTC m=+0.851186961 container init 5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:01 compute-0 podman[261584]: 2026-01-31 08:44:01.487305679 +0000 UTC m=+0.858941040 container start 5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chandrasekhar, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:44:01 compute-0 cranky_chandrasekhar[261600]: 167 167
Jan 31 08:44:01 compute-0 systemd[1]: libpod-5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508.scope: Deactivated successfully.
Jan 31 08:44:01 compute-0 podman[261584]: 2026-01-31 08:44:01.655039945 +0000 UTC m=+1.026675336 container attach 5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chandrasekhar, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:44:01 compute-0 podman[261584]: 2026-01-31 08:44:01.655551028 +0000 UTC m=+1.027186399 container died 5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chandrasekhar, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:44:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:02 compute-0 ceph-mon[75294]: pgmap v1556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3358087f7d7d53d3951821e4d69b83c90e27fa00ef4dde69fb1c1972a95f48ff-merged.mount: Deactivated successfully.
Jan 31 08:44:03 compute-0 ceph-mon[75294]: pgmap v1557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:04 compute-0 podman[261584]: 2026-01-31 08:44:04.087398351 +0000 UTC m=+3.459033712 container remove 5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chandrasekhar, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:04 compute-0 systemd[1]: libpod-conmon-5c5f7d5f717c9b1221fd231fc46a2e28243ed5436835ad242b02413024333508.scope: Deactivated successfully.
Jan 31 08:44:04 compute-0 podman[261624]: 2026-01-31 08:44:04.276923275 +0000 UTC m=+0.109865474 container create ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:44:04 compute-0 podman[261624]: 2026-01-31 08:44:04.188149572 +0000 UTC m=+0.021091791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:04 compute-0 systemd[1]: Started libpod-conmon-ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a.scope.
Jan 31 08:44:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350f76b0e046fd30d670c7d570482cbd42858e67b05fd0f1daafb8f9cc3a25a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350f76b0e046fd30d670c7d570482cbd42858e67b05fd0f1daafb8f9cc3a25a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350f76b0e046fd30d670c7d570482cbd42858e67b05fd0f1daafb8f9cc3a25a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d350f76b0e046fd30d670c7d570482cbd42858e67b05fd0f1daafb8f9cc3a25a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 podman[261638]: 2026-01-31 08:44:04.415242058 +0000 UTC m=+0.100359302 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:44:04 compute-0 podman[261624]: 2026-01-31 08:44:04.534302777 +0000 UTC m=+0.367244996 container init ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:44:04 compute-0 podman[261624]: 2026-01-31 08:44:04.540016384 +0000 UTC m=+0.372958573 container start ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:44:04 compute-0 podman[261624]: 2026-01-31 08:44:04.608423476 +0000 UTC m=+0.441365705 container attach ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:44:04 compute-0 busy_hellman[261657]: {
Jan 31 08:44:04 compute-0 busy_hellman[261657]:     "0": [
Jan 31 08:44:04 compute-0 busy_hellman[261657]:         {
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "devices": [
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "/dev/loop3"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             ],
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_name": "ceph_lv0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_size": "21470642176",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "name": "ceph_lv0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "tags": {
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cluster_name": "ceph",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.crush_device_class": "",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.encrypted": "0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.objectstore": "bluestore",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osd_id": "0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.type": "block",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.vdo": "0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.with_tpm": "0"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             },
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "type": "block",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "vg_name": "ceph_vg0"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:         }
Jan 31 08:44:04 compute-0 busy_hellman[261657]:     ],
Jan 31 08:44:04 compute-0 busy_hellman[261657]:     "1": [
Jan 31 08:44:04 compute-0 busy_hellman[261657]:         {
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "devices": [
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "/dev/loop4"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             ],
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_name": "ceph_lv1",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_size": "21470642176",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "name": "ceph_lv1",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "tags": {
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cluster_name": "ceph",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.crush_device_class": "",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.encrypted": "0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.objectstore": "bluestore",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osd_id": "1",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.type": "block",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.vdo": "0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.with_tpm": "0"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             },
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "type": "block",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "vg_name": "ceph_vg1"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:         }
Jan 31 08:44:04 compute-0 busy_hellman[261657]:     ],
Jan 31 08:44:04 compute-0 busy_hellman[261657]:     "2": [
Jan 31 08:44:04 compute-0 busy_hellman[261657]:         {
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "devices": [
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "/dev/loop5"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             ],
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_name": "ceph_lv2",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_size": "21470642176",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "name": "ceph_lv2",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "tags": {
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.cluster_name": "ceph",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.crush_device_class": "",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.encrypted": "0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.objectstore": "bluestore",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osd_id": "2",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.type": "block",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.vdo": "0",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:                 "ceph.with_tpm": "0"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             },
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "type": "block",
Jan 31 08:44:04 compute-0 busy_hellman[261657]:             "vg_name": "ceph_vg2"
Jan 31 08:44:04 compute-0 busy_hellman[261657]:         }
Jan 31 08:44:04 compute-0 busy_hellman[261657]:     ]
Jan 31 08:44:04 compute-0 busy_hellman[261657]: }
Jan 31 08:44:04 compute-0 systemd[1]: libpod-ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a.scope: Deactivated successfully.
Jan 31 08:44:04 compute-0 podman[261673]: 2026-01-31 08:44:04.847780186 +0000 UTC m=+0.018315030 container died ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d350f76b0e046fd30d670c7d570482cbd42858e67b05fd0f1daafb8f9cc3a25a-merged.mount: Deactivated successfully.
Jan 31 08:44:05 compute-0 ceph-mon[75294]: pgmap v1558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:05 compute-0 podman[261673]: 2026-01-31 08:44:05.504933757 +0000 UTC m=+0.675468581 container remove ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:44:05 compute-0 systemd[1]: libpod-conmon-ac776dfb3675946abb5c55be080db1fb5b23144f6494e3cd98c0fc8fcb74917a.scope: Deactivated successfully.
Jan 31 08:44:05 compute-0 sudo[261547]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:05 compute-0 sudo[261686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:05 compute-0 sudo[261686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:05 compute-0 sudo[261686]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:05 compute-0 sudo[261711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:44:05 compute-0 sudo[261711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:05 compute-0 podman[261749]: 2026-01-31 08:44:05.972339848 +0000 UTC m=+0.067980461 container create 052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_spence, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:44:06 compute-0 podman[261749]: 2026-01-31 08:44:05.92320068 +0000 UTC m=+0.018841313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:06 compute-0 systemd[1]: Started libpod-conmon-052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be.scope.
Jan 31 08:44:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:06 compute-0 podman[261749]: 2026-01-31 08:44:06.313617009 +0000 UTC m=+0.409257652 container init 052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:44:06 compute-0 podman[261749]: 2026-01-31 08:44:06.319084489 +0000 UTC m=+0.414725102 container start 052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:44:06 compute-0 bold_spence[261765]: 167 167
Jan 31 08:44:06 compute-0 systemd[1]: libpod-052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be.scope: Deactivated successfully.
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:06 compute-0 podman[261749]: 2026-01-31 08:44:06.425754141 +0000 UTC m=+0.521394794 container attach 052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_spence, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:44:06 compute-0 podman[261749]: 2026-01-31 08:44:06.42612018 +0000 UTC m=+0.521760803 container died 052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_spence, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6640b90644d60e2900693f6a719e7067473c884da29af1293e1a8d8f680a125f-merged.mount: Deactivated successfully.
Jan 31 08:44:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:07 compute-0 ceph-mon[75294]: pgmap v1559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:07 compute-0 podman[261749]: 2026-01-31 08:44:07.647239435 +0000 UTC m=+1.742880048 container remove 052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_spence, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:44:07 compute-0 systemd[1]: libpod-conmon-052cd0f6fc5b7c0c0cfb2b265a3809d8a00f530af42e8634fff9fdc56ed653be.scope: Deactivated successfully.
Jan 31 08:44:07 compute-0 podman[261791]: 2026-01-31 08:44:07.846351785 +0000 UTC m=+0.102056505 container create a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:44:07 compute-0 podman[261791]: 2026-01-31 08:44:07.767562596 +0000 UTC m=+0.023267346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:07 compute-0 systemd[1]: Started libpod-conmon-a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11.scope.
Jan 31 08:44:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02747d88fe55b386ce7f475037912e19681c71ee72fa8aa807842a3da6179961/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02747d88fe55b386ce7f475037912e19681c71ee72fa8aa807842a3da6179961/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02747d88fe55b386ce7f475037912e19681c71ee72fa8aa807842a3da6179961/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02747d88fe55b386ce7f475037912e19681c71ee72fa8aa807842a3da6179961/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:08 compute-0 podman[261791]: 2026-01-31 08:44:08.093127885 +0000 UTC m=+0.348832635 container init a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:44:08 compute-0 podman[261791]: 2026-01-31 08:44:08.098240707 +0000 UTC m=+0.353945427 container start a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:44:08 compute-0 podman[261791]: 2026-01-31 08:44:08.211931378 +0000 UTC m=+0.467636118 container attach a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cannon, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:08 compute-0 lvm[261887]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:44:08 compute-0 lvm[261886]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:44:08 compute-0 lvm[261887]: VG ceph_vg1 finished
Jan 31 08:44:08 compute-0 lvm[261886]: VG ceph_vg0 finished
Jan 31 08:44:08 compute-0 lvm[261889]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:44:08 compute-0 lvm[261889]: VG ceph_vg2 finished
Jan 31 08:44:08 compute-0 priceless_cannon[261808]: {}
Jan 31 08:44:08 compute-0 systemd[1]: libpod-a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11.scope: Deactivated successfully.
Jan 31 08:44:08 compute-0 systemd[1]: libpod-a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11.scope: Consumed 1.143s CPU time.
Jan 31 08:44:08 compute-0 podman[261791]: 2026-01-31 08:44:08.878968592 +0000 UTC m=+1.134673312 container died a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-02747d88fe55b386ce7f475037912e19681c71ee72fa8aa807842a3da6179961-merged.mount: Deactivated successfully.
Jan 31 08:44:09 compute-0 ceph-mon[75294]: pgmap v1560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:09 compute-0 podman[261791]: 2026-01-31 08:44:09.936522017 +0000 UTC m=+2.192226737 container remove a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cannon, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:44:09 compute-0 systemd[1]: libpod-conmon-a62e6749c28008abcfb9cf1aa0d7973ee669615ae7374e62b8a57d8afb0cbf11.scope: Deactivated successfully.
Jan 31 08:44:09 compute-0 sudo[261711]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:44:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:44:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:44:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:44:10 compute-0 sudo[261904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:44:10 compute-0 sudo[261904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:10 compute-0 sudo[261904]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:44:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:44:11 compute-0 ceph-mon[75294]: pgmap v1561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:13 compute-0 ceph-mon[75294]: pgmap v1562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:15 compute-0 ceph-mon[75294]: pgmap v1563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.209 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.209 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.209 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.210 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.210 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:44:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060848531' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.787 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.966 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.967 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5108MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.967 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:15 compute-0 nova_compute[240062]: 2026-01-31 08:44:15.968 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.100 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.101 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.136 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:16 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1060848531' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:44:16 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094077196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.750 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.756 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.871 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.872 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:44:16 compute-0 nova_compute[240062]: 2026-01-31 08:44:16.873 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:17 compute-0 ceph-mon[75294]: pgmap v1564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:17 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4094077196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:17 compute-0 nova_compute[240062]: 2026-01-31 08:44:17.874 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:17 compute-0 nova_compute[240062]: 2026-01-31 08:44:17.875 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:17 compute-0 nova_compute[240062]: 2026-01-31 08:44:17.875 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:44:18 compute-0 nova_compute[240062]: 2026-01-31 08:44:18.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:19 compute-0 ceph-mon[75294]: pgmap v1565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:20 compute-0 nova_compute[240062]: 2026-01-31 08:44:20.149 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:20 compute-0 nova_compute[240062]: 2026-01-31 08:44:20.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:20 compute-0 nova_compute[240062]: 2026-01-31 08:44:20.154 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:44:20 compute-0 nova_compute[240062]: 2026-01-31 08:44:20.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:44:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:20 compute-0 nova_compute[240062]: 2026-01-31 08:44:20.792 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:44:21 compute-0 ceph-mon[75294]: pgmap v1566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:22 compute-0 nova_compute[240062]: 2026-01-31 08:44:22.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:22 compute-0 nova_compute[240062]: 2026-01-31 08:44:22.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:23 compute-0 ceph-mon[75294]: pgmap v1567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:25 compute-0 nova_compute[240062]: 2026-01-31 08:44:25.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:25 compute-0 ceph-mon[75294]: pgmap v1568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:27 compute-0 ceph-mon[75294]: pgmap v1569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:29 compute-0 podman[261973]: 2026-01-31 08:44:29.170795439 +0000 UTC m=+0.043347001 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:44:30 compute-0 ceph-mon[75294]: pgmap v1570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:31 compute-0 ceph-mon[75294]: pgmap v1571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:33 compute-0 ceph-mon[75294]: pgmap v1572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:34 compute-0 nova_compute[240062]: 2026-01-31 08:44:34.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:35 compute-0 ceph-mon[75294]: pgmap v1573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:35 compute-0 podman[261994]: 2026-01-31 08:44:35.215548254 +0000 UTC m=+0.087459251 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 08:44:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:37 compute-0 ceph-mon[75294]: pgmap v1574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:44:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2800079765' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:44:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:44:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2800079765' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:44:39 compute-0 ceph-mon[75294]: pgmap v1575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:41 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2800079765' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:44:41 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2800079765' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:44:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:42 compute-0 ceph-mon[75294]: pgmap v1576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:44 compute-0 ceph-mon[75294]: pgmap v1577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:45 compute-0 ceph-mon[75294]: pgmap v1578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:44:46.989 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:44:46.990 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:44:46.990 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:47 compute-0 ceph-mon[75294]: pgmap v1579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:49 compute-0 ceph-mon[75294]: pgmap v1580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:44:50
Jan 31 08:44:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:44:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:44:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Jan 31 08:44:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:44:51 compute-0 ceph-mon[75294]: pgmap v1581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:53 compute-0 ceph-mon[75294]: pgmap v1582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:54 compute-0 sshd-session[262019]: Invalid user sol from 193.32.162.145 port 39650
Jan 31 08:44:54 compute-0 sshd-session[262019]: Connection closed by invalid user sol 193.32.162.145 port 39650 [preauth]
Jan 31 08:44:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:55 compute-0 ceph-mon[75294]: pgmap v1583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:44:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:44:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:57 compute-0 ceph-mon[75294]: pgmap v1584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:59 compute-0 ceph-mon[75294]: pgmap v1585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:00 compute-0 podman[262021]: 2026-01-31 08:45:00.186663478 +0000 UTC m=+0.053955553 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:45:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:01 compute-0 ceph-mon[75294]: pgmap v1586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:03 compute-0 ceph-mon[75294]: pgmap v1587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:05 compute-0 ceph-mon[75294]: pgmap v1588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:06 compute-0 podman[262040]: 2026-01-31 08:45:06.227523105 +0000 UTC m=+0.098762790 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.230773842318446e-07 of space, bias 1.0, pg target 0.0002469232152695534 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.971249842136068e-06 of space, bias 4.0, pg target 0.0023654998105632815 quantized to 16 (current 16)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:45:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:07 compute-0 ceph-mon[75294]: pgmap v1589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:09 compute-0 ceph-mon[75294]: pgmap v1590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:10 compute-0 sudo[262067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:10 compute-0 sudo[262067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:10 compute-0 sudo[262067]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:10 compute-0 sudo[262092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:45:10 compute-0 sudo[262092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:10 compute-0 sudo[262092]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:45:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:45:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:45:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:45:10 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:11 compute-0 sudo[262147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:11 compute-0 sudo[262147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:11 compute-0 sudo[262147]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:11 compute-0 sudo[262172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:45:11 compute-0 sudo[262172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:11 compute-0 podman[262209]: 2026-01-31 08:45:11.427396972 +0000 UTC m=+0.109050274 container create e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 31 08:45:11 compute-0 podman[262209]: 2026-01-31 08:45:11.343081383 +0000 UTC m=+0.024734705 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:11 compute-0 systemd[1]: Started libpod-conmon-e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59.scope.
Jan 31 08:45:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:11 compute-0 podman[262209]: 2026-01-31 08:45:11.696909024 +0000 UTC m=+0.378562356 container init e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_snyder, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:45:11 compute-0 podman[262209]: 2026-01-31 08:45:11.703252587 +0000 UTC m=+0.384905889 container start e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_snyder, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:45:11 compute-0 crazy_snyder[262225]: 167 167
Jan 31 08:45:11 compute-0 systemd[1]: libpod-e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59.scope: Deactivated successfully.
Jan 31 08:45:11 compute-0 podman[262209]: 2026-01-31 08:45:11.835117865 +0000 UTC m=+0.516771197 container attach e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:11 compute-0 podman[262209]: 2026-01-31 08:45:11.836431418 +0000 UTC m=+0.518084720 container died e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:45:11 compute-0 ceph-mon[75294]: pgmap v1591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:45:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:45:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:45:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:45:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c2a4ad4a28f267fbc0cff53d35c90fdaf8b6cff3a805fc051f74211af40c7a5-merged.mount: Deactivated successfully.
Jan 31 08:45:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:12 compute-0 podman[262209]: 2026-01-31 08:45:12.705844095 +0000 UTC m=+1.387497397 container remove e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:45:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:12 compute-0 systemd[1]: libpod-conmon-e8f5ef6e6ce37611b72924d9945bdd7e3fc264615d31ce3c0e4858a032c11a59.scope: Deactivated successfully.
Jan 31 08:45:12 compute-0 podman[262248]: 2026-01-31 08:45:12.864631462 +0000 UTC m=+0.071200484 container create 21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:45:12 compute-0 podman[262248]: 2026-01-31 08:45:12.816491779 +0000 UTC m=+0.023060831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:12 compute-0 systemd[1]: Started libpod-conmon-21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa.scope.
Jan 31 08:45:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d287065e027ae1181e0e7c8b5a47decb266c33a1932150dfc90ad084582e4d94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d287065e027ae1181e0e7c8b5a47decb266c33a1932150dfc90ad084582e4d94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d287065e027ae1181e0e7c8b5a47decb266c33a1932150dfc90ad084582e4d94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d287065e027ae1181e0e7c8b5a47decb266c33a1932150dfc90ad084582e4d94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d287065e027ae1181e0e7c8b5a47decb266c33a1932150dfc90ad084582e4d94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:13 compute-0 podman[262248]: 2026-01-31 08:45:13.037936811 +0000 UTC m=+0.244505853 container init 21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:13 compute-0 podman[262248]: 2026-01-31 08:45:13.044273593 +0000 UTC m=+0.250842615 container start 21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:13 compute-0 podman[262248]: 2026-01-31 08:45:13.064750787 +0000 UTC m=+0.271319809 container attach 21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:13 compute-0 stupefied_matsumoto[262264]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:45:13 compute-0 stupefied_matsumoto[262264]: --> All data devices are unavailable
Jan 31 08:45:13 compute-0 systemd[1]: libpod-21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa.scope: Deactivated successfully.
Jan 31 08:45:13 compute-0 podman[262284]: 2026-01-31 08:45:13.516424506 +0000 UTC m=+0.024798757 container died 21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d287065e027ae1181e0e7c8b5a47decb266c33a1932150dfc90ad084582e4d94-merged.mount: Deactivated successfully.
Jan 31 08:45:13 compute-0 podman[262284]: 2026-01-31 08:45:13.806149566 +0000 UTC m=+0.314523787 container remove 21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:45:13 compute-0 systemd[1]: libpod-conmon-21d601f158b4f08b5f69a0fab8be6af451bb6f223e0a599659ea5be6eedfbefa.scope: Deactivated successfully.
Jan 31 08:45:13 compute-0 sudo[262172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:13 compute-0 sudo[262298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:13 compute-0 sudo[262298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:13 compute-0 sudo[262298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:13 compute-0 sudo[262323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:45:13 compute-0 sudo[262323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:14 compute-0 ceph-mon[75294]: pgmap v1592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:14 compute-0 podman[262360]: 2026-01-31 08:45:14.335230367 +0000 UTC m=+0.076971722 container create 5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:45:14 compute-0 podman[262360]: 2026-01-31 08:45:14.289165237 +0000 UTC m=+0.030906612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:14 compute-0 systemd[1]: Started libpod-conmon-5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329.scope.
Jan 31 08:45:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:14 compute-0 podman[262360]: 2026-01-31 08:45:14.543988833 +0000 UTC m=+0.285730208 container init 5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:45:14 compute-0 podman[262360]: 2026-01-31 08:45:14.550734326 +0000 UTC m=+0.292475681 container start 5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:14 compute-0 goofy_ramanujan[262376]: 167 167
Jan 31 08:45:14 compute-0 systemd[1]: libpod-5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329.scope: Deactivated successfully.
Jan 31 08:45:14 compute-0 podman[262360]: 2026-01-31 08:45:14.589885399 +0000 UTC m=+0.331626764 container attach 5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:45:14 compute-0 podman[262360]: 2026-01-31 08:45:14.590273439 +0000 UTC m=+0.332014794 container died 5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e410303691df83109f8d3cebf4278b0546a6d974982266ce3eaaff5541e940ea-merged.mount: Deactivated successfully.
Jan 31 08:45:14 compute-0 podman[262360]: 2026-01-31 08:45:14.94096295 +0000 UTC m=+0.682704315 container remove 5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:45:14 compute-0 systemd[1]: libpod-conmon-5df726fe44a96223d496bdf581562fb7df8aee16931e3c3754f9322e2ef50329.scope: Deactivated successfully.
Jan 31 08:45:15 compute-0 podman[262401]: 2026-01-31 08:45:15.098088445 +0000 UTC m=+0.060823709 container create e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_goldberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:45:15 compute-0 podman[262401]: 2026-01-31 08:45:15.061354293 +0000 UTC m=+0.024089577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:15 compute-0 systemd[1]: Started libpod-conmon-e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45.scope.
Jan 31 08:45:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d00ef8cdaead85de932f64d481f406b38029a122395f6acadfd6e73357bbbe1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d00ef8cdaead85de932f64d481f406b38029a122395f6acadfd6e73357bbbe1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d00ef8cdaead85de932f64d481f406b38029a122395f6acadfd6e73357bbbe1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d00ef8cdaead85de932f64d481f406b38029a122395f6acadfd6e73357bbbe1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:15 compute-0 podman[262401]: 2026-01-31 08:45:15.241756284 +0000 UTC m=+0.204491568 container init e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_goldberg, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:45:15 compute-0 podman[262401]: 2026-01-31 08:45:15.248940888 +0000 UTC m=+0.211676152 container start e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:45:15 compute-0 podman[262401]: 2026-01-31 08:45:15.269400502 +0000 UTC m=+0.232135796 container attach e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_goldberg, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]: {
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:     "0": [
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:         {
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "devices": [
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "/dev/loop3"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             ],
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_name": "ceph_lv0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_size": "21470642176",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "name": "ceph_lv0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "tags": {
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cluster_name": "ceph",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.crush_device_class": "",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.encrypted": "0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.objectstore": "bluestore",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osd_id": "0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.type": "block",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.vdo": "0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.with_tpm": "0"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             },
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "type": "block",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "vg_name": "ceph_vg0"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:         }
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:     ],
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:     "1": [
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:         {
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "devices": [
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "/dev/loop4"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             ],
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_name": "ceph_lv1",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_size": "21470642176",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "name": "ceph_lv1",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "tags": {
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cluster_name": "ceph",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.crush_device_class": "",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.encrypted": "0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.objectstore": "bluestore",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osd_id": "1",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.type": "block",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.vdo": "0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.with_tpm": "0"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             },
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "type": "block",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "vg_name": "ceph_vg1"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:         }
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:     ],
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:     "2": [
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:         {
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "devices": [
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "/dev/loop5"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             ],
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_name": "ceph_lv2",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_size": "21470642176",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "name": "ceph_lv2",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "tags": {
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.cluster_name": "ceph",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.crush_device_class": "",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.encrypted": "0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.objectstore": "bluestore",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osd_id": "2",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.type": "block",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.vdo": "0",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:                 "ceph.with_tpm": "0"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             },
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "type": "block",
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:             "vg_name": "ceph_vg2"
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:         }
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]:     ]
Jan 31 08:45:15 compute-0 upbeat_goldberg[262417]: }
Jan 31 08:45:15 compute-0 systemd[1]: libpod-e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45.scope: Deactivated successfully.
Jan 31 08:45:15 compute-0 podman[262401]: 2026-01-31 08:45:15.559777029 +0000 UTC m=+0.522512313 container died e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d00ef8cdaead85de932f64d481f406b38029a122395f6acadfd6e73357bbbe1-merged.mount: Deactivated successfully.
Jan 31 08:45:15 compute-0 podman[262401]: 2026-01-31 08:45:15.70854199 +0000 UTC m=+0.671277254 container remove e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_goldberg, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:45:15 compute-0 systemd[1]: libpod-conmon-e1f1d62ab0784db84821382d6198641896b8854d25fa2220d43aa3b617bb1e45.scope: Deactivated successfully.
Jan 31 08:45:15 compute-0 sudo[262323]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:15 compute-0 sudo[262438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:15 compute-0 sudo[262438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:15 compute-0 sudo[262438]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:15 compute-0 sudo[262463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:45:15 compute-0 sudo[262463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:16 compute-0 ceph-mon[75294]: pgmap v1593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:16 compute-0 nova_compute[240062]: 2026-01-31 08:45:16.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:16 compute-0 podman[262499]: 2026-01-31 08:45:16.202628434 +0000 UTC m=+0.088219680 container create 57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:45:16 compute-0 podman[262499]: 2026-01-31 08:45:16.135547536 +0000 UTC m=+0.021138802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:16 compute-0 systemd[1]: Started libpod-conmon-57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d.scope.
Jan 31 08:45:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:16 compute-0 podman[262499]: 2026-01-31 08:45:16.358420864 +0000 UTC m=+0.244012120 container init 57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:45:16 compute-0 podman[262499]: 2026-01-31 08:45:16.365186797 +0000 UTC m=+0.250778033 container start 57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:16 compute-0 heuristic_bhabha[262515]: 167 167
Jan 31 08:45:16 compute-0 systemd[1]: libpod-57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d.scope: Deactivated successfully.
Jan 31 08:45:16 compute-0 podman[262499]: 2026-01-31 08:45:16.396782456 +0000 UTC m=+0.282373722 container attach 57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhabha, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:45:16 compute-0 podman[262499]: 2026-01-31 08:45:16.397223738 +0000 UTC m=+0.282814984 container died 57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhabha, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:45:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-834c6e0a21d16eece48f9633b6247ae5c81654f9a5553c13aa8cfb4342a37202-merged.mount: Deactivated successfully.
Jan 31 08:45:16 compute-0 podman[262499]: 2026-01-31 08:45:16.523189113 +0000 UTC m=+0.408780349 container remove 57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:45:16 compute-0 systemd[1]: libpod-conmon-57c804501b1c11df4f7ae479950a24a9b14c0456575d7508f2c16bed8d11f54d.scope: Deactivated successfully.
Jan 31 08:45:16 compute-0 podman[262542]: 2026-01-31 08:45:16.685394328 +0000 UTC m=+0.070270501 container create 20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_black, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:45:16 compute-0 podman[262542]: 2026-01-31 08:45:16.639424771 +0000 UTC m=+0.024300944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:16 compute-0 systemd[1]: Started libpod-conmon-20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3.scope.
Jan 31 08:45:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf9d5cd28578d4569977dbdb7a0c01179f420b7788ce267f70b22097c7beb95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf9d5cd28578d4569977dbdb7a0c01179f420b7788ce267f70b22097c7beb95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf9d5cd28578d4569977dbdb7a0c01179f420b7788ce267f70b22097c7beb95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf9d5cd28578d4569977dbdb7a0c01179f420b7788ce267f70b22097c7beb95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:16 compute-0 podman[262542]: 2026-01-31 08:45:16.798126866 +0000 UTC m=+0.183003029 container init 20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_black, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:45:16 compute-0 podman[262542]: 2026-01-31 08:45:16.80377575 +0000 UTC m=+0.188651903 container start 20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_black, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:45:16 compute-0 podman[262542]: 2026-01-31 08:45:16.858525652 +0000 UTC m=+0.243401865 container attach 20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_black, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:45:17 compute-0 nova_compute[240062]: 2026-01-31 08:45:17.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:17 compute-0 nova_compute[240062]: 2026-01-31 08:45:17.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:45:17 compute-0 nova_compute[240062]: 2026-01-31 08:45:17.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:17 compute-0 lvm[262633]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:45:17 compute-0 lvm[262633]: VG ceph_vg0 finished
Jan 31 08:45:17 compute-0 lvm[262636]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:45:17 compute-0 lvm[262636]: VG ceph_vg1 finished
Jan 31 08:45:17 compute-0 lvm[262638]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:45:17 compute-0 lvm[262638]: VG ceph_vg2 finished
Jan 31 08:45:17 compute-0 loving_black[262557]: {}
Jan 31 08:45:17 compute-0 systemd[1]: libpod-20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3.scope: Deactivated successfully.
Jan 31 08:45:17 compute-0 systemd[1]: libpod-20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3.scope: Consumed 1.257s CPU time.
Jan 31 08:45:17 compute-0 podman[262542]: 2026-01-31 08:45:17.684305442 +0000 UTC m=+1.069181605 container died 20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbf9d5cd28578d4569977dbdb7a0c01179f420b7788ce267f70b22097c7beb95-merged.mount: Deactivated successfully.
Jan 31 08:45:17 compute-0 podman[262542]: 2026-01-31 08:45:17.877812928 +0000 UTC m=+1.262689091 container remove 20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_black, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:17 compute-0 systemd[1]: libpod-conmon-20f0faf6a974e32b66426c476468f1b213634ce817f1b7c9adcf322c6f6e8ec3.scope: Deactivated successfully.
Jan 31 08:45:17 compute-0 sudo[262463]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:45:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:45:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:45:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:45:18 compute-0 ceph-mon[75294]: pgmap v1594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:18 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:45:18 compute-0 sudo[262653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:45:18 compute-0 sudo[262653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:18 compute-0 sudo[262653]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:18 compute-0 nova_compute[240062]: 2026-01-31 08:45:18.234 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:18 compute-0 nova_compute[240062]: 2026-01-31 08:45:18.235 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:18 compute-0 nova_compute[240062]: 2026-01-31 08:45:18.235 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:18 compute-0 nova_compute[240062]: 2026-01-31 08:45:18.235 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:45:18 compute-0 nova_compute[240062]: 2026-01-31 08:45:18.235 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:45:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555847495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:18 compute-0 nova_compute[240062]: 2026-01-31 08:45:18.821 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:19 compute-0 nova_compute[240062]: 2026-01-31 08:45:19.019 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:45:19 compute-0 nova_compute[240062]: 2026-01-31 08:45:19.021 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5076MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:45:19 compute-0 nova_compute[240062]: 2026-01-31 08:45:19.021 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:19 compute-0 nova_compute[240062]: 2026-01-31 08:45:19.022 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:45:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3555847495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:20 compute-0 ceph-mon[75294]: pgmap v1595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:21 compute-0 nova_compute[240062]: 2026-01-31 08:45:21.178 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:45:21 compute-0 nova_compute[240062]: 2026-01-31 08:45:21.179 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:45:21 compute-0 nova_compute[240062]: 2026-01-31 08:45:21.193 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:45:21 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032377313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:21 compute-0 nova_compute[240062]: 2026-01-31 08:45:21.803 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:21 compute-0 nova_compute[240062]: 2026-01-31 08:45:21.808 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:45:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:22 compute-0 ceph-mon[75294]: pgmap v1596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:22 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3032377313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:23 compute-0 ceph-mon[75294]: pgmap v1597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:26 compute-0 ceph-mon[75294]: pgmap v1598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:27 compute-0 nova_compute[240062]: 2026-01-31 08:45:27.451 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:45:27 compute-0 nova_compute[240062]: 2026-01-31 08:45:27.452 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:45:27 compute-0 nova_compute[240062]: 2026-01-31 08:45:27.452 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 8.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:28 compute-0 ceph-mon[75294]: pgmap v1599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:29 compute-0 ceph-mon[75294]: pgmap v1600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.448 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.449 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.449 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.449 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.940 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.941 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.941 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.942 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:30 compute-0 nova_compute[240062]: 2026-01-31 08:45:30.942 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:31 compute-0 podman[262722]: 2026-01-31 08:45:31.20540229 +0000 UTC m=+0.079249381 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:45:31 compute-0 ceph-mon[75294]: pgmap v1601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:34 compute-0 ceph-mon[75294]: pgmap v1602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:36 compute-0 ceph-mon[75294]: pgmap v1603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:37 compute-0 podman[262741]: 2026-01-31 08:45:37.211554188 +0000 UTC m=+0.086245639 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:45:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:37 compute-0 ceph-mon[75294]: pgmap v1604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:45:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516029303' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:45:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:45:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516029303' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:45:40 compute-0 ceph-mon[75294]: pgmap v1605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3516029303' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:45:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3516029303' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:45:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:45:41 compute-0 ceph-mon[75294]: pgmap v1606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:45:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:45:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 31 08:45:43 compute-0 ceph-mon[75294]: pgmap v1607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:45:44 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 31 08:45:44 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 31 08:45:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 820 KiB/s wr, 26 op/s
Jan 31 08:45:45 compute-0 ceph-mon[75294]: osdmap e152: 3 total, 3 up, 3 in
Jan 31 08:45:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 820 KiB/s wr, 26 op/s
Jan 31 08:45:46 compute-0 ceph-mon[75294]: pgmap v1609: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 820 KiB/s wr, 26 op/s
Jan 31 08:45:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:45:46.990 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:45:46.990 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:45:46.990 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:47 compute-0 ceph-mon[75294]: pgmap v1610: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 820 KiB/s wr, 26 op/s
Jan 31 08:45:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 31 08:45:50 compute-0 ceph-mon[75294]: pgmap v1611: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 31 08:45:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 28 op/s
Jan 31 08:45:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:45:50
Jan 31 08:45:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:45:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:45:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'vms', 'images', 'default.rgw.log', 'volumes', '.mgr']
Jan 31 08:45:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:45:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 28 op/s
Jan 31 08:45:52 compute-0 ceph-mon[75294]: pgmap v1612: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 28 op/s
Jan 31 08:45:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:53 compute-0 ceph-mon[75294]: pgmap v1613: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 28 op/s
Jan 31 08:45:53 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 31 08:45:53 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:53.943974) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:45:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 31 08:45:53 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849153944007, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 3492841, "memory_usage": 3555536, "flush_reason": "Manual Compaction"}
Jan 31 08:45:53 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849154061981, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3426260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30400, "largest_seqno": 32447, "table_properties": {"data_size": 3416837, "index_size": 5981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18631, "raw_average_key_size": 20, "raw_value_size": 3398181, "raw_average_value_size": 3661, "num_data_blocks": 265, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848925, "oldest_key_time": 1769848925, "file_creation_time": 1769849153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 118058 microseconds, and 5013 cpu microseconds.
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.062029) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3426260 bytes OK
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.062048) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.245905) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.245953) EVENT_LOG_v1 {"time_micros": 1769849154245943, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.245979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3484290, prev total WAL file size 3484290, number of live WAL files 2.
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.246698) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3345KB)], [68(7739KB)]
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849154246723, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11351141, "oldest_snapshot_seqno": -1}
Jan 31 08:45:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 MiB/s wr, 4 op/s
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5496 keys, 9551760 bytes, temperature: kUnknown
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849154502170, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9551760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9513988, "index_size": 22942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 137626, "raw_average_key_size": 25, "raw_value_size": 9413617, "raw_average_value_size": 1712, "num_data_blocks": 943, "num_entries": 5496, "num_filter_entries": 5496, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769849154, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.502852) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9551760 bytes
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.527260) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 44.4 rd, 37.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6014, records dropped: 518 output_compression: NoCompression
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.527302) EVENT_LOG_v1 {"time_micros": 1769849154527286, "job": 38, "event": "compaction_finished", "compaction_time_micros": 255523, "compaction_time_cpu_micros": 15098, "output_level": 6, "num_output_files": 1, "total_output_size": 9551760, "num_input_records": 6014, "num_output_records": 5496, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849154528599, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849154529844, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.246587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.529878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.529884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.529887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.529889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:45:54 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:45:54.529891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:45:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:45:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:45:56 compute-0 ceph-mon[75294]: pgmap v1614: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 MiB/s wr, 4 op/s
Jan 31 08:45:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Jan 31 08:45:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:58 compute-0 ceph-mon[75294]: pgmap v1615: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Jan 31 08:45:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Jan 31 08:46:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Jan 31 08:46:00 compute-0 ceph-mon[75294]: pgmap v1616: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Jan 31 08:46:01 compute-0 ceph-mon[75294]: pgmap v1617: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Jan 31 08:46:01 compute-0 anacron[99328]: Job `cron.weekly' started
Jan 31 08:46:01 compute-0 anacron[99328]: Job `cron.weekly' terminated
Jan 31 08:46:02 compute-0 podman[262772]: 2026-01-31 08:46:02.206754887 +0000 UTC m=+0.078151693 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:46:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:03 compute-0 ceph-mon[75294]: pgmap v1618: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:06 compute-0 ceph-mon[75294]: pgmap v1619: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033373541452226877 of space, bias 1.0, pg target 0.10012062435668063 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9712964173615222e-06 of space, bias 4.0, pg target 0.002365555700833827 quantized to 16 (current 16)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:46:07 compute-0 ceph-mon[75294]: pgmap v1620: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:08 compute-0 podman[262791]: 2026-01-31 08:46:08.223172598 +0000 UTC m=+0.097295652 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:46:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:09 compute-0 ceph-mon[75294]: pgmap v1621: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:11 compute-0 ceph-mon[75294]: pgmap v1622: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:14 compute-0 ceph-mon[75294]: pgmap v1623: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:16 compute-0 ceph-mon[75294]: pgmap v1624: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.155 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:18 compute-0 sudo[262817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:18 compute-0 sudo[262817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:18 compute-0 sudo[262817]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:18 compute-0 sudo[262842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:46:18 compute-0 sudo[262842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:18 compute-0 ceph-mon[75294]: pgmap v1625: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.787 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.788 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.788 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.789 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:46:18 compute-0 nova_compute[240062]: 2026-01-31 08:46:18.789 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:18 compute-0 sudo[262842]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:46:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:46:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:46:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:46:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:46:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:46:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:46:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:46:18 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:46:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:46:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:19 compute-0 sudo[262917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:19 compute-0 sudo[262917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:19 compute-0 sudo[262917]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:19 compute-0 sudo[262942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:46:19 compute-0 sudo[262942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:19 compute-0 podman[262978]: 2026-01-31 08:46:19.451304922 +0000 UTC m=+0.114173106 container create f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kepler, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:46:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:46:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075941107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:19 compute-0 podman[262978]: 2026-01-31 08:46:19.363897973 +0000 UTC m=+0.026766187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:46:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:46:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:46:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:46:19 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:19 compute-0 nova_compute[240062]: 2026-01-31 08:46:19.481 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:19 compute-0 systemd[1]: Started libpod-conmon-f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400.scope.
Jan 31 08:46:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:19 compute-0 podman[262978]: 2026-01-31 08:46:19.668592147 +0000 UTC m=+0.331460351 container init f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:46:19 compute-0 podman[262978]: 2026-01-31 08:46:19.675376761 +0000 UTC m=+0.338244945 container start f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:46:19 compute-0 zen_kepler[262997]: 167 167
Jan 31 08:46:19 compute-0 systemd[1]: libpod-f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400.scope: Deactivated successfully.
Jan 31 08:46:19 compute-0 podman[262978]: 2026-01-31 08:46:19.69135686 +0000 UTC m=+0.354225074 container attach f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:46:19 compute-0 podman[262978]: 2026-01-31 08:46:19.692058338 +0000 UTC m=+0.354926532 container died f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kepler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:46:19 compute-0 nova_compute[240062]: 2026-01-31 08:46:19.845 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:46:19 compute-0 nova_compute[240062]: 2026-01-31 08:46:19.848 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5093MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:46:19 compute-0 nova_compute[240062]: 2026-01-31 08:46:19.849 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:19 compute-0 nova_compute[240062]: 2026-01-31 08:46:19.849 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e80978dacd4b9f87cbf50fbcbe63068f38849e0259d54ef00d5703c99c818a20-merged.mount: Deactivated successfully.
Jan 31 08:46:20 compute-0 podman[262978]: 2026-01-31 08:46:20.157705004 +0000 UTC m=+0.820573188 container remove f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 31 08:46:20 compute-0 systemd[1]: libpod-conmon-f6af2966fea926df222a75c70a9c6bb4fb663e1f216a53876c817d92611fd400.scope: Deactivated successfully.
Jan 31 08:46:20 compute-0 podman[263021]: 2026-01-31 08:46:20.343461562 +0000 UTC m=+0.082115824 container create 15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_solomon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:46:20 compute-0 podman[263021]: 2026-01-31 08:46:20.28680145 +0000 UTC m=+0.025455742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:20 compute-0 systemd[1]: Started libpod-conmon-15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1.scope.
Jan 31 08:46:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad64a80bac5346c54d0288543b987ba376fb63278f04dbdbdffd61ba44e796d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad64a80bac5346c54d0288543b987ba376fb63278f04dbdbdffd61ba44e796d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad64a80bac5346c54d0288543b987ba376fb63278f04dbdbdffd61ba44e796d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad64a80bac5346c54d0288543b987ba376fb63278f04dbdbdffd61ba44e796d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad64a80bac5346c54d0288543b987ba376fb63278f04dbdbdffd61ba44e796d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:20 compute-0 ceph-mon[75294]: pgmap v1626: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:20 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1075941107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:20 compute-0 podman[263021]: 2026-01-31 08:46:20.526504919 +0000 UTC m=+0.265159191 container init 15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:46:20 compute-0 podman[263021]: 2026-01-31 08:46:20.533435607 +0000 UTC m=+0.272089869 container start 15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:46:20 compute-0 podman[263021]: 2026-01-31 08:46:20.642106361 +0000 UTC m=+0.380760633 container attach 15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_solomon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:46:20 compute-0 nova_compute[240062]: 2026-01-31 08:46:20.771 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:46:20 compute-0 nova_compute[240062]: 2026-01-31 08:46:20.772 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:46:20 compute-0 nova_compute[240062]: 2026-01-31 08:46:20.796 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:20 compute-0 clever_solomon[263037]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:46:20 compute-0 clever_solomon[263037]: --> All data devices are unavailable
Jan 31 08:46:20 compute-0 systemd[1]: libpod-15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1.scope: Deactivated successfully.
Jan 31 08:46:20 compute-0 podman[263021]: 2026-01-31 08:46:20.973351385 +0000 UTC m=+0.712005647 container died 15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad64a80bac5346c54d0288543b987ba376fb63278f04dbdbdffd61ba44e796d6-merged.mount: Deactivated successfully.
Jan 31 08:46:21 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:46:21 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58003491' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:21 compute-0 nova_compute[240062]: 2026-01-31 08:46:21.395 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:21 compute-0 nova_compute[240062]: 2026-01-31 08:46:21.401 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:46:21 compute-0 podman[263021]: 2026-01-31 08:46:21.48735627 +0000 UTC m=+1.226010532 container remove 15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_solomon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:46:21 compute-0 nova_compute[240062]: 2026-01-31 08:46:21.516 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:46:21 compute-0 nova_compute[240062]: 2026-01-31 08:46:21.518 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:46:21 compute-0 nova_compute[240062]: 2026-01-31 08:46:21.518 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:21 compute-0 nova_compute[240062]: 2026-01-31 08:46:21.519 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:21 compute-0 nova_compute[240062]: 2026-01-31 08:46:21.519 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:46:21 compute-0 sudo[262942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:21 compute-0 systemd[1]: libpod-conmon-15568f80b59a4250198a58d0da96af0887fdc4e07ba1f2d5748335cd140938d1.scope: Deactivated successfully.
Jan 31 08:46:21 compute-0 sudo[263090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:21 compute-0 sudo[263090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:21 compute-0 sudo[263090]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:21 compute-0 sudo[263115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:46:21 compute-0 sudo[263115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:21 compute-0 ceph-mon[75294]: pgmap v1627: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:21 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/58003491' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:21 compute-0 podman[263152]: 2026-01-31 08:46:21.896372265 +0000 UTC m=+0.019301195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:22 compute-0 podman[263152]: 2026-01-31 08:46:22.028966762 +0000 UTC m=+0.151895672 container create cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:46:22 compute-0 systemd[1]: Started libpod-conmon-cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca.scope.
Jan 31 08:46:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:22 compute-0 podman[263152]: 2026-01-31 08:46:22.28076746 +0000 UTC m=+0.403696390 container init cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_sinoussi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:46:22 compute-0 podman[263152]: 2026-01-31 08:46:22.286434985 +0000 UTC m=+0.409363895 container start cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:46:22 compute-0 objective_sinoussi[263168]: 167 167
Jan 31 08:46:22 compute-0 systemd[1]: libpod-cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca.scope: Deactivated successfully.
Jan 31 08:46:22 compute-0 podman[263152]: 2026-01-31 08:46:22.342326267 +0000 UTC m=+0.465255187 container attach cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:46:22 compute-0 podman[263152]: 2026-01-31 08:46:22.344119863 +0000 UTC m=+0.467048783 container died cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:46:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-67335767259e603b9d266aabbc06c92555d03ef871265bc2bef2c69dfb6f9284-merged.mount: Deactivated successfully.
Jan 31 08:46:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:22 compute-0 podman[263152]: 2026-01-31 08:46:22.848557113 +0000 UTC m=+0.971486023 container remove cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_sinoussi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 31 08:46:22 compute-0 systemd[1]: libpod-conmon-cda467c02d6dc3b0e11ed7565ba6bc32b7e146b130f3dba86154c300cc57ddca.scope: Deactivated successfully.
Jan 31 08:46:23 compute-0 podman[263191]: 2026-01-31 08:46:23.050128495 +0000 UTC m=+0.101676785 container create aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:46:23 compute-0 podman[263191]: 2026-01-31 08:46:22.971007789 +0000 UTC m=+0.022556109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:23 compute-0 systemd[1]: Started libpod-conmon-aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807.scope.
Jan 31 08:46:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d59645d9f9a88d3e43b2a329d75c2691ef685b7db52351673e1f7e69f49154/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d59645d9f9a88d3e43b2a329d75c2691ef685b7db52351673e1f7e69f49154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d59645d9f9a88d3e43b2a329d75c2691ef685b7db52351673e1f7e69f49154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d59645d9f9a88d3e43b2a329d75c2691ef685b7db52351673e1f7e69f49154/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:23 compute-0 podman[263191]: 2026-01-31 08:46:23.23775507 +0000 UTC m=+0.289303390 container init aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goldstine, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:46:23 compute-0 podman[263191]: 2026-01-31 08:46:23.24592172 +0000 UTC m=+0.297470010 container start aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:46:23 compute-0 podman[263191]: 2026-01-31 08:46:23.308129322 +0000 UTC m=+0.359677612 container attach aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goldstine, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:46:23 compute-0 magical_goldstine[263208]: {
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:     "0": [
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:         {
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "devices": [
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "/dev/loop3"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             ],
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_name": "ceph_lv0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_size": "21470642176",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "name": "ceph_lv0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "tags": {
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cluster_name": "ceph",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.crush_device_class": "",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.encrypted": "0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.objectstore": "bluestore",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osd_id": "0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.type": "block",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.vdo": "0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.with_tpm": "0"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             },
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "type": "block",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "vg_name": "ceph_vg0"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:         }
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:     ],
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:     "1": [
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:         {
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "devices": [
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "/dev/loop4"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             ],
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_name": "ceph_lv1",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_size": "21470642176",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "name": "ceph_lv1",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "tags": {
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cluster_name": "ceph",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.crush_device_class": "",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.encrypted": "0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.objectstore": "bluestore",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osd_id": "1",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.type": "block",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.vdo": "0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.with_tpm": "0"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             },
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "type": "block",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "vg_name": "ceph_vg1"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:         }
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:     ],
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:     "2": [
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:         {
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "devices": [
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "/dev/loop5"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             ],
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_name": "ceph_lv2",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_size": "21470642176",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "name": "ceph_lv2",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "tags": {
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.cluster_name": "ceph",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.crush_device_class": "",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.encrypted": "0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.objectstore": "bluestore",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osd_id": "2",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.type": "block",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.vdo": "0",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:                 "ceph.with_tpm": "0"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             },
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "type": "block",
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:             "vg_name": "ceph_vg2"
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:         }
Jan 31 08:46:23 compute-0 magical_goldstine[263208]:     ]
Jan 31 08:46:23 compute-0 magical_goldstine[263208]: }
Jan 31 08:46:23 compute-0 systemd[1]: libpod-aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807.scope: Deactivated successfully.
Jan 31 08:46:23 compute-0 podman[263191]: 2026-01-31 08:46:23.580830417 +0000 UTC m=+0.632378727 container died aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:46:23 compute-0 nova_compute[240062]: 2026-01-31 08:46:23.653 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:23 compute-0 nova_compute[240062]: 2026-01-31 08:46:23.656 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:23 compute-0 nova_compute[240062]: 2026-01-31 08:46:23.656 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:46:23 compute-0 nova_compute[240062]: 2026-01-31 08:46:23.656 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-91d59645d9f9a88d3e43b2a329d75c2691ef685b7db52351673e1f7e69f49154-merged.mount: Deactivated successfully.
Jan 31 08:46:23 compute-0 podman[263191]: 2026-01-31 08:46:23.807928093 +0000 UTC m=+0.859476383 container remove aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:46:23 compute-0 systemd[1]: libpod-conmon-aaf71716cfb76f185db591d19cbfda0e7a2d4071e83378534ae4490c37d61807.scope: Deactivated successfully.
Jan 31 08:46:23 compute-0 sudo[263115]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:23 compute-0 sudo[263228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:23 compute-0 ceph-mon[75294]: pgmap v1628: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:23 compute-0 sudo[263228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:23 compute-0 sudo[263228]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:23 compute-0 sudo[263253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:46:23 compute-0 sudo[263253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:24 compute-0 nova_compute[240062]: 2026-01-31 08:46:24.292 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:46:24 compute-0 nova_compute[240062]: 2026-01-31 08:46:24.293 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:24 compute-0 nova_compute[240062]: 2026-01-31 08:46:24.293 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:24 compute-0 podman[263289]: 2026-01-31 08:46:24.284835078 +0000 UTC m=+0.024009626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:24 compute-0 podman[263289]: 2026-01-31 08:46:24.3938656 +0000 UTC m=+0.133040128 container create a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:46:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:24 compute-0 systemd[1]: Started libpod-conmon-a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738.scope.
Jan 31 08:46:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:24 compute-0 podman[263289]: 2026-01-31 08:46:24.563915225 +0000 UTC m=+0.303089783 container init a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 08:46:24 compute-0 podman[263289]: 2026-01-31 08:46:24.569696443 +0000 UTC m=+0.308870971 container start a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:46:24 compute-0 hardcore_merkle[263305]: 167 167
Jan 31 08:46:24 compute-0 systemd[1]: libpod-a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738.scope: Deactivated successfully.
Jan 31 08:46:24 compute-0 podman[263289]: 2026-01-31 08:46:24.67187577 +0000 UTC m=+0.411050328 container attach a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:46:24 compute-0 podman[263289]: 2026-01-31 08:46:24.67229789 +0000 UTC m=+0.411472418 container died a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:46:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0e1a0a9ee2f67c20a7f934b6753e65f447a9f56b88b7b8bb117bf8333731f34-merged.mount: Deactivated successfully.
Jan 31 08:46:25 compute-0 podman[263289]: 2026-01-31 08:46:25.074385349 +0000 UTC m=+0.813559877 container remove a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:46:25 compute-0 systemd[1]: libpod-conmon-a06e8d3390e20c68778ad5cce78c57d3bf84c54dcd61917b20c8f381c3c78738.scope: Deactivated successfully.
Jan 31 08:46:25 compute-0 podman[263329]: 2026-01-31 08:46:25.256435222 +0000 UTC m=+0.075551346 container create 025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:46:25 compute-0 podman[263329]: 2026-01-31 08:46:25.210531846 +0000 UTC m=+0.029648000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:25 compute-0 systemd[1]: Started libpod-conmon-025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88.scope.
Jan 31 08:46:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7055ac0816690dc24788085de64614e136d0b0ad410921493965983ea6711bf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7055ac0816690dc24788085de64614e136d0b0ad410921493965983ea6711bf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7055ac0816690dc24788085de64614e136d0b0ad410921493965983ea6711bf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7055ac0816690dc24788085de64614e136d0b0ad410921493965983ea6711bf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:25 compute-0 podman[263329]: 2026-01-31 08:46:25.436645057 +0000 UTC m=+0.255761191 container init 025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:46:25 compute-0 podman[263329]: 2026-01-31 08:46:25.444796436 +0000 UTC m=+0.263912560 container start 025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pike, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:46:25 compute-0 podman[263329]: 2026-01-31 08:46:25.580895871 +0000 UTC m=+0.400012015 container attach 025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pike, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:46:26 compute-0 ceph-mon[75294]: pgmap v1629: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:26 compute-0 nova_compute[240062]: 2026-01-31 08:46:26.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:26 compute-0 lvm[263423]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:46:26 compute-0 lvm[263423]: VG ceph_vg0 finished
Jan 31 08:46:26 compute-0 lvm[263426]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:46:26 compute-0 lvm[263426]: VG ceph_vg1 finished
Jan 31 08:46:26 compute-0 lvm[263428]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:46:26 compute-0 lvm[263428]: VG ceph_vg2 finished
Jan 31 08:46:26 compute-0 modest_pike[263347]: {}
Jan 31 08:46:26 compute-0 systemd[1]: libpod-025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88.scope: Deactivated successfully.
Jan 31 08:46:26 compute-0 systemd[1]: libpod-025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88.scope: Consumed 1.203s CPU time.
Jan 31 08:46:26 compute-0 podman[263329]: 2026-01-31 08:46:26.315500426 +0000 UTC m=+1.134616550 container died 025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:46:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7055ac0816690dc24788085de64614e136d0b0ad410921493965983ea6711bf2-merged.mount: Deactivated successfully.
Jan 31 08:46:27 compute-0 nova_compute[240062]: 2026-01-31 08:46:27.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:27 compute-0 podman[263329]: 2026-01-31 08:46:27.307359478 +0000 UTC m=+2.126475602 container remove 025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:46:27 compute-0 systemd[1]: libpod-conmon-025713f93d9e02b3d475bad0fbd6f76512920f39c382b1dad70c106ee9520b88.scope: Deactivated successfully.
Jan 31 08:46:27 compute-0 sudo[263253]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:46:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:46:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:46:27 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:46:27 compute-0 sudo[263444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:46:27 compute-0 sudo[263444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-0 sudo[263444]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:46:28 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 7240 writes, 32K keys, 7240 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7239 writes, 7239 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1306 writes, 5935 keys, 1306 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
                                           Interval WAL: 1305 writes, 1305 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.4      2.33              0.08        19    0.123       0      0       0.0       0.0
                                             L6      1/0    9.11 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5     36.4     29.7      4.71              0.26        18    0.262     89K    10K       0.0       0.0
                                            Sum      1/0    9.11 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     24.3     25.6      7.04              0.34        37    0.190     89K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     14.6     15.1      2.83              0.08         8    0.353     23K   2526       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     36.4     29.7      4.71              0.26        18    0.262     89K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.8      2.27              0.08        18    0.126       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.039, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.06 MB/s write, 0.17 GB read, 0.06 MB/s read, 7.0 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 2.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55cc8bf858d0#2 capacity: 304.00 MB usage: 21.16 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000328 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1316,20.47 MB,6.73221%) FilterBlock(38,251.11 KB,0.0806658%) IndexBlock(38,457.16 KB,0.146856%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:46:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:28 compute-0 ceph-mon[75294]: pgmap v1630: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:46:28 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:46:29 compute-0 ceph-mon[75294]: pgmap v1631: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:30 compute-0 nova_compute[240062]: 2026-01-31 08:46:30.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:30 compute-0 nova_compute[240062]: 2026-01-31 08:46:30.156 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:46:30 compute-0 nova_compute[240062]: 2026-01-31 08:46:30.304 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:46:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:32 compute-0 ceph-mon[75294]: pgmap v1632: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:32 compute-0 sshd-session[263470]: Invalid user sol from 80.94.92.182 port 45152
Jan 31 08:46:32 compute-0 podman[263472]: 2026-01-31 08:46:32.878945836 +0000 UTC m=+0.053650236 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:46:33 compute-0 sshd-session[263470]: Connection closed by invalid user sol 80.94.92.182 port 45152 [preauth]
Jan 31 08:46:34 compute-0 nova_compute[240062]: 2026-01-31 08:46:34.299 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:34 compute-0 ceph-mon[75294]: pgmap v1633: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:35 compute-0 ceph-mon[75294]: pgmap v1634: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:37 compute-0 nova_compute[240062]: 2026-01-31 08:46:37.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:37 compute-0 ceph-mon[75294]: pgmap v1635: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:39 compute-0 podman[263492]: 2026-01-31 08:46:39.198071628 +0000 UTC m=+0.072288023 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:46:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:46:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353628743' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:46:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:46:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353628743' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:46:40 compute-0 ceph-mon[75294]: pgmap v1636: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2353628743' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:46:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/2353628743' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:46:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:42 compute-0 ceph-mon[75294]: pgmap v1637: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:43 compute-0 ceph-mon[75294]: pgmap v1638: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:45 compute-0 ceph-mon[75294]: pgmap v1639: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:46:46.991 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:46:46.992 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:46:46.992 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:47 compute-0 nova_compute[240062]: 2026-01-31 08:46:47.480 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:47 compute-0 ceph-mon[75294]: pgmap v1640: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:50 compute-0 ceph-mon[75294]: pgmap v1641: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:46:50
Jan 31 08:46:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:46:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:46:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Jan 31 08:46:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:46:52 compute-0 ceph-mon[75294]: pgmap v1642: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:54 compute-0 ceph-mon[75294]: pgmap v1643: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:46:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:46:56 compute-0 ceph-mon[75294]: pgmap v1644: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:57 compute-0 ceph-mon[75294]: pgmap v1645: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:59 compute-0 ceph-mon[75294]: pgmap v1646: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:01 compute-0 ceph-mon[75294]: pgmap v1647: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:03 compute-0 podman[263516]: 2026-01-31 08:47:03.166977254 +0000 UTC m=+0.041372811 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 31 08:47:04 compute-0 ceph-mon[75294]: pgmap v1648: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:06 compute-0 ceph-mon[75294]: pgmap v1649: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033373541452226877 of space, bias 1.0, pg target 0.10012062435668063 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9712964173615222e-06 of space, bias 4.0, pg target 0.002365555700833827 quantized to 16 (current 16)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:47:07 compute-0 ceph-mon[75294]: pgmap v1650: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:09 compute-0 ceph-mon[75294]: pgmap v1651: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:10 compute-0 podman[263535]: 2026-01-31 08:47:10.206870779 +0000 UTC m=+0.077299221 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:47:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:11 compute-0 ceph-mon[75294]: pgmap v1652: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 31 08:47:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 31 08:47:12 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 31 08:47:13 compute-0 ceph-mon[75294]: pgmap v1653: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:13 compute-0 ceph-mon[75294]: osdmap e153: 3 total, 3 up, 3 in
Jan 31 08:47:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:47:16 compute-0 ceph-mon[75294]: pgmap v1655: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:47:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:47:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.190 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.191 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.191 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.191 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.192 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:18 compute-0 ceph-mon[75294]: pgmap v1656: 305 pgs: 305 active+clean; 21 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 16 op/s
Jan 31 08:47:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Jan 31 08:47:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:47:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/766027026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.757 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.927 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.929 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5110MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.929 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:18 compute-0 nova_compute[240062]: 2026-01-31 08:47:18.929 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.298 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.298 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.383 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing inventories for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.476 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating ProviderTree inventory for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.476 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Updating inventory in ProviderTree for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.491 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing aggregate associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:47:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/766027026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.514 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Refreshing trait associations for resource provider 4da0c29a-ac15-4049-acad-d0fd4b82723a, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NODE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:47:19 compute-0 nova_compute[240062]: 2026-01-31 08:47:19.528 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:20 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:47:20 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083021054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:20 compute-0 nova_compute[240062]: 2026-01-31 08:47:20.117 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:20 compute-0 nova_compute[240062]: 2026-01-31 08:47:20.122 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:47:20 compute-0 nova_compute[240062]: 2026-01-31 08:47:20.148 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:47:20 compute-0 nova_compute[240062]: 2026-01-31 08:47:20.150 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:47:20 compute-0 nova_compute[240062]: 2026-01-31 08:47:20.150 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:47:20 compute-0 ceph-mon[75294]: pgmap v1657: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Jan 31 08:47:20 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2083021054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:21 compute-0 nova_compute[240062]: 2026-01-31 08:47:21.149 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:21 compute-0 nova_compute[240062]: 2026-01-31 08:47:21.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:21 compute-0 nova_compute[240062]: 2026-01-31 08:47:21.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:21 compute-0 nova_compute[240062]: 2026-01-31 08:47:21.292 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:21 compute-0 nova_compute[240062]: 2026-01-31 08:47:21.292 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:47:21 compute-0 ceph-mon[75294]: pgmap v1658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:47:22 compute-0 nova_compute[240062]: 2026-01-31 08:47:22.297 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:22 compute-0 nova_compute[240062]: 2026-01-31 08:47:22.297 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:47:22 compute-0 nova_compute[240062]: 2026-01-31 08:47:22.297 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:47:22 compute-0 nova_compute[240062]: 2026-01-31 08:47:22.321 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:47:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:47:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 31 08:47:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 31 08:47:22 compute-0 ceph-mon[75294]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 31 08:47:23 compute-0 nova_compute[240062]: 2026-01-31 08:47:23.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:23 compute-0 ceph-mon[75294]: pgmap v1659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:47:23 compute-0 ceph-mon[75294]: osdmap e154: 3 total, 3 up, 3 in
Jan 31 08:47:24 compute-0 nova_compute[240062]: 2026-01-31 08:47:24.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 716 B/s wr, 8 op/s
Jan 31 08:47:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:25 compute-0 ceph-mon[75294]: pgmap v1661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 716 B/s wr, 8 op/s
Jan 31 08:47:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 716 B/s wr, 8 op/s
Jan 31 08:47:27 compute-0 nova_compute[240062]: 2026-01-31 08:47:27.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:27 compute-0 nova_compute[240062]: 2026-01-31 08:47:27.156 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:27 compute-0 sudo[263606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:27 compute-0 sudo[263606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:27 compute-0 sudo[263606]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:27 compute-0 sudo[263631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:47:27 compute-0 sudo[263631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:27 compute-0 ceph-mon[75294]: pgmap v1662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 716 B/s wr, 8 op/s
Jan 31 08:47:28 compute-0 sudo[263631]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:47:28 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:47:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:47:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:47:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:47:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:47:28 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:47:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:47:28 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:47:28 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:47:28 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:28 compute-0 sudo[263688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:28 compute-0 sudo[263688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:28 compute-0 sudo[263688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:28 compute-0 sudo[263713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:47:28 compute-0 sudo[263713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:47:28 compute-0 podman[263751]: 2026-01-31 08:47:28.657389799 +0000 UTC m=+0.040917291 container create 05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:47:28 compute-0 systemd[1]: Started libpod-conmon-05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19.scope.
Jan 31 08:47:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:28 compute-0 podman[263751]: 2026-01-31 08:47:28.634748401 +0000 UTC m=+0.018275923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:28 compute-0 podman[263751]: 2026-01-31 08:47:28.742314683 +0000 UTC m=+0.125842175 container init 05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:47:28 compute-0 podman[263751]: 2026-01-31 08:47:28.748622333 +0000 UTC m=+0.132149815 container start 05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:47:28 compute-0 goofy_banach[263767]: 167 167
Jan 31 08:47:28 compute-0 systemd[1]: libpod-05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19.scope: Deactivated successfully.
Jan 31 08:47:28 compute-0 podman[263751]: 2026-01-31 08:47:28.756308989 +0000 UTC m=+0.139836501 container attach 05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:47:28 compute-0 podman[263751]: 2026-01-31 08:47:28.75673766 +0000 UTC m=+0.140265162 container died 05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:47:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7e22834d00936aea665580c980005fa303e19ab51c3e6bdf7922848b504e928-merged.mount: Deactivated successfully.
Jan 31 08:47:28 compute-0 podman[263751]: 2026-01-31 08:47:28.81735098 +0000 UTC m=+0.200878472 container remove 05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:47:28 compute-0 systemd[1]: libpod-conmon-05da2546a4b7d53ce1b485022dc532bfaa44fdd8ad2189124896ff335791ad19.scope: Deactivated successfully.
Jan 31 08:47:29 compute-0 podman[263792]: 2026-01-31 08:47:28.914198563 +0000 UTC m=+0.019280849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:29 compute-0 podman[263792]: 2026-01-31 08:47:29.012372562 +0000 UTC m=+0.117454828 container create d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:47:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:47:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:47:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:47:29 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:29 compute-0 systemd[1]: Started libpod-conmon-d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46.scope.
Jan 31 08:47:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbd475a585fc88c77f177e975afbfe630065c6201c7b2dc6ee767b77693127c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbd475a585fc88c77f177e975afbfe630065c6201c7b2dc6ee767b77693127c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbd475a585fc88c77f177e975afbfe630065c6201c7b2dc6ee767b77693127c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbd475a585fc88c77f177e975afbfe630065c6201c7b2dc6ee767b77693127c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbd475a585fc88c77f177e975afbfe630065c6201c7b2dc6ee767b77693127c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:29 compute-0 podman[263792]: 2026-01-31 08:47:29.244449781 +0000 UTC m=+0.349532077 container init d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_banzai, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:47:29 compute-0 podman[263792]: 2026-01-31 08:47:29.250185615 +0000 UTC m=+0.355267881 container start d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_banzai, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:47:29 compute-0 podman[263792]: 2026-01-31 08:47:29.284684822 +0000 UTC m=+0.389767098 container attach d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_banzai, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:47:29 compute-0 optimistic_banzai[263809]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:47:29 compute-0 optimistic_banzai[263809]: --> All data devices are unavailable
Jan 31 08:47:29 compute-0 systemd[1]: libpod-d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46.scope: Deactivated successfully.
Jan 31 08:47:29 compute-0 podman[263792]: 2026-01-31 08:47:29.673494914 +0000 UTC m=+0.778577180 container died d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 08:47:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-abbd475a585fc88c77f177e975afbfe630065c6201c7b2dc6ee767b77693127c-merged.mount: Deactivated successfully.
Jan 31 08:47:30 compute-0 podman[263792]: 2026-01-31 08:47:30.15358027 +0000 UTC m=+1.258662536 container remove d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:47:30 compute-0 sudo[263713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:30 compute-0 systemd[1]: libpod-conmon-d1d5be13e23a604a4a4ebf3f839633fe521f34cdf946a6752e9930dac2b35f46.scope: Deactivated successfully.
Jan 31 08:47:30 compute-0 sudo[263842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:30 compute-0 sudo[263842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:30 compute-0 sudo[263842]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:30 compute-0 sudo[263867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:47:30 compute-0 sudo[263867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:30 compute-0 ceph-mon[75294]: pgmap v1663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 08:47:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:30 compute-0 podman[263905]: 2026-01-31 08:47:30.589459277 +0000 UTC m=+0.036966014 container create ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:47:30 compute-0 systemd[1]: Started libpod-conmon-ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788.scope.
Jan 31 08:47:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:30 compute-0 podman[263905]: 2026-01-31 08:47:30.570443946 +0000 UTC m=+0.017950623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:30 compute-0 podman[263905]: 2026-01-31 08:47:30.681414219 +0000 UTC m=+0.128920896 container init ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:47:30 compute-0 podman[263905]: 2026-01-31 08:47:30.686323012 +0000 UTC m=+0.133829669 container start ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:30 compute-0 vigilant_cannon[263921]: 167 167
Jan 31 08:47:30 compute-0 systemd[1]: libpod-ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788.scope: Deactivated successfully.
Jan 31 08:47:30 compute-0 conmon[263921]: conmon ab9e6062fe83d56df3fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788.scope/container/memory.events
Jan 31 08:47:30 compute-0 podman[263905]: 2026-01-31 08:47:30.701618022 +0000 UTC m=+0.149124779 container attach ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cannon, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:30 compute-0 podman[263905]: 2026-01-31 08:47:30.702083905 +0000 UTC m=+0.149590552 container died ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:47:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb7d5d5bc144f2a8fc986cb6e0a70f7d9cc3f59551d49a8a033a1552d36e4d3c-merged.mount: Deactivated successfully.
Jan 31 08:47:30 compute-0 podman[263905]: 2026-01-31 08:47:30.800341156 +0000 UTC m=+0.247847803 container remove ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:47:30 compute-0 systemd[1]: libpod-conmon-ab9e6062fe83d56df3fcba3c4e085447e40a49671d55bbb7d2e733d80c256788.scope: Deactivated successfully.
Jan 31 08:47:30 compute-0 podman[263944]: 2026-01-31 08:47:30.924861034 +0000 UTC m=+0.040139381 container create 059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:47:30 compute-0 systemd[1]: Started libpod-conmon-059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d.scope.
Jan 31 08:47:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9735714cd942ba82fad4c7c4b985c902b8853e688aa75fbd6cf3882f3b01203a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9735714cd942ba82fad4c7c4b985c902b8853e688aa75fbd6cf3882f3b01203a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9735714cd942ba82fad4c7c4b985c902b8853e688aa75fbd6cf3882f3b01203a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9735714cd942ba82fad4c7c4b985c902b8853e688aa75fbd6cf3882f3b01203a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:31 compute-0 podman[263944]: 2026-01-31 08:47:30.903350996 +0000 UTC m=+0.018629373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:31 compute-0 podman[263944]: 2026-01-31 08:47:31.013914418 +0000 UTC m=+0.129192795 container init 059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_brattain, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:47:31 compute-0 podman[263944]: 2026-01-31 08:47:31.019418555 +0000 UTC m=+0.134696902 container start 059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:47:31 compute-0 podman[263944]: 2026-01-31 08:47:31.02929215 +0000 UTC m=+0.144570537 container attach 059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_brattain, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]: {
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:     "0": [
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:         {
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "devices": [
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "/dev/loop3"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             ],
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_name": "ceph_lv0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_size": "21470642176",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "name": "ceph_lv0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "tags": {
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cluster_name": "ceph",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.crush_device_class": "",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.encrypted": "0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.objectstore": "bluestore",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osd_id": "0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.type": "block",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.vdo": "0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.with_tpm": "0"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             },
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "type": "block",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "vg_name": "ceph_vg0"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:         }
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:     ],
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:     "1": [
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:         {
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "devices": [
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "/dev/loop4"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             ],
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_name": "ceph_lv1",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_size": "21470642176",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "name": "ceph_lv1",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "tags": {
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cluster_name": "ceph",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.crush_device_class": "",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.encrypted": "0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.objectstore": "bluestore",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osd_id": "1",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.type": "block",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.vdo": "0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.with_tpm": "0"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             },
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "type": "block",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "vg_name": "ceph_vg1"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:         }
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:     ],
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:     "2": [
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:         {
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "devices": [
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "/dev/loop5"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             ],
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_name": "ceph_lv2",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_size": "21470642176",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "name": "ceph_lv2",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "tags": {
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.cluster_name": "ceph",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.crush_device_class": "",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.encrypted": "0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.objectstore": "bluestore",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osd_id": "2",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.type": "block",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.vdo": "0",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:                 "ceph.with_tpm": "0"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             },
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "type": "block",
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:             "vg_name": "ceph_vg2"
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:         }
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]:     ]
Jan 31 08:47:31 compute-0 quizzical_brattain[263961]: }
Jan 31 08:47:31 compute-0 systemd[1]: libpod-059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d.scope: Deactivated successfully.
Jan 31 08:47:31 compute-0 conmon[263961]: conmon 059a52fc412a1a2303f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d.scope/container/memory.events
Jan 31 08:47:31 compute-0 podman[263944]: 2026-01-31 08:47:31.308319782 +0000 UTC m=+0.423598139 container died 059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9735714cd942ba82fad4c7c4b985c902b8853e688aa75fbd6cf3882f3b01203a-merged.mount: Deactivated successfully.
Jan 31 08:47:31 compute-0 podman[263944]: 2026-01-31 08:47:31.393935203 +0000 UTC m=+0.509213560 container remove 059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:47:31 compute-0 systemd[1]: libpod-conmon-059a52fc412a1a2303f2f7b92fa3aad1f6d95c5d4c83b12ac1781856a5aabf1d.scope: Deactivated successfully.
Jan 31 08:47:31 compute-0 sudo[263867]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:31 compute-0 sudo[263984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:31 compute-0 sudo[263984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:31 compute-0 sudo[263984]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:31 compute-0 sudo[264009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:47:31 compute-0 sudo[264009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:31 compute-0 podman[264045]: 2026-01-31 08:47:31.833821308 +0000 UTC m=+0.088221943 container create b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:47:31 compute-0 podman[264045]: 2026-01-31 08:47:31.764804383 +0000 UTC m=+0.019205038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:31 compute-0 systemd[1]: Started libpod-conmon-b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107.scope.
Jan 31 08:47:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:31 compute-0 podman[264045]: 2026-01-31 08:47:31.977413788 +0000 UTC m=+0.231814433 container init b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lalande, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:47:31 compute-0 podman[264045]: 2026-01-31 08:47:31.98234059 +0000 UTC m=+0.236741225 container start b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lalande, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:47:31 compute-0 charming_lalande[264061]: 167 167
Jan 31 08:47:31 compute-0 systemd[1]: libpod-b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107.scope: Deactivated successfully.
Jan 31 08:47:32 compute-0 podman[264045]: 2026-01-31 08:47:32.068347153 +0000 UTC m=+0.322747808 container attach b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:47:32 compute-0 podman[264045]: 2026-01-31 08:47:32.068713562 +0000 UTC m=+0.323114217 container died b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:47:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-59282f9e5a7aad4293aee24ba7fafe7d462d01080d0345e420092a883404aa1a-merged.mount: Deactivated successfully.
Jan 31 08:47:32 compute-0 podman[264045]: 2026-01-31 08:47:32.376176886 +0000 UTC m=+0.630577521 container remove b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:47:32 compute-0 systemd[1]: libpod-conmon-b5d4b9b9a6440feae120daeea9bd229a7b5e30e0ae567638a2e09aac92782107.scope: Deactivated successfully.
Jan 31 08:47:32 compute-0 ceph-mon[75294]: pgmap v1664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:32 compute-0 podman[264087]: 2026-01-31 08:47:32.509885521 +0000 UTC m=+0.040042738 container create 359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:47:32 compute-0 systemd[1]: Started libpod-conmon-359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255.scope.
Jan 31 08:47:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d843c641c9faafb3ef9cab2bbde9ad20119cd787e78b3eb26dd71733584b1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d843c641c9faafb3ef9cab2bbde9ad20119cd787e78b3eb26dd71733584b1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d843c641c9faafb3ef9cab2bbde9ad20119cd787e78b3eb26dd71733584b1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d843c641c9faafb3ef9cab2bbde9ad20119cd787e78b3eb26dd71733584b1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:32 compute-0 podman[264087]: 2026-01-31 08:47:32.585446743 +0000 UTC m=+0.115603980 container init 359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williamson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:47:32 compute-0 podman[264087]: 2026-01-31 08:47:32.49199058 +0000 UTC m=+0.022147807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:32 compute-0 podman[264087]: 2026-01-31 08:47:32.59240697 +0000 UTC m=+0.122564177 container start 359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:47:32 compute-0 podman[264087]: 2026-01-31 08:47:32.602492161 +0000 UTC m=+0.132649538 container attach 359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williamson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:47:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:33 compute-0 lvm[264187]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:47:33 compute-0 lvm[264188]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:47:33 compute-0 lvm[264188]: VG ceph_vg1 finished
Jan 31 08:47:33 compute-0 lvm[264187]: VG ceph_vg0 finished
Jan 31 08:47:33 compute-0 lvm[264194]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:47:33 compute-0 lvm[264194]: VG ceph_vg2 finished
Jan 31 08:47:33 compute-0 podman[264178]: 2026-01-31 08:47:33.283080766 +0000 UTC m=+0.090934106 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:47:33 compute-0 xenodochial_williamson[264103]: {}
Jan 31 08:47:33 compute-0 systemd[1]: libpod-359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255.scope: Deactivated successfully.
Jan 31 08:47:33 compute-0 systemd[1]: libpod-359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255.scope: Consumed 1.070s CPU time.
Jan 31 08:47:33 compute-0 conmon[264103]: conmon 359b4be895ee9efd6e66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255.scope/container/memory.events
Jan 31 08:47:33 compute-0 podman[264087]: 2026-01-31 08:47:33.351036273 +0000 UTC m=+0.881193510 container died 359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williamson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:47:33 compute-0 ceph-mon[75294]: pgmap v1665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-25d843c641c9faafb3ef9cab2bbde9ad20119cd787e78b3eb26dd71733584b1e-merged.mount: Deactivated successfully.
Jan 31 08:47:33 compute-0 podman[264087]: 2026-01-31 08:47:33.691117945 +0000 UTC m=+1.221275162 container remove 359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:33 compute-0 systemd[1]: libpod-conmon-359b4be895ee9efd6e66834561e6e2908aac0fe620cb3ac93005ff4432da6255.scope: Deactivated successfully.
Jan 31 08:47:33 compute-0 sudo[264009]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:47:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:47:33 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:47:33 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:47:33 compute-0 sudo[264216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:47:33 compute-0 sudo[264216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:33 compute-0 sudo[264216]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:47:34 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:47:35 compute-0 ceph-mon[75294]: pgmap v1666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:38 compute-0 ceph-mon[75294]: pgmap v1667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:47:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/621339491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:47:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:47:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/621339491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:47:40 compute-0 ceph-mon[75294]: pgmap v1668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/621339491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:47:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/621339491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:47:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:41 compute-0 podman[264241]: 2026-01-31 08:47:41.223371156 +0000 UTC m=+0.089147447 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 08:47:42 compute-0 ceph-mon[75294]: pgmap v1669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:44 compute-0 ceph-mon[75294]: pgmap v1670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:46 compute-0 ceph-mon[75294]: pgmap v1671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:47:46.992 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:47:46.993 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:47:46.993 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:47 compute-0 ceph-mon[75294]: pgmap v1672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:49 compute-0 ceph-mon[75294]: pgmap v1673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:47:50
Jan 31 08:47:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:47:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:47:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'default.rgw.meta']
Jan 31 08:47:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:47:52 compute-0 ceph-mon[75294]: pgmap v1674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:54 compute-0 ceph-mon[75294]: pgmap v1675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:47:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:47:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:56 compute-0 ceph-mon[75294]: pgmap v1676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:57 compute-0 ceph-mon[75294]: pgmap v1677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:00 compute-0 ceph-mon[75294]: pgmap v1678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:02 compute-0 ceph-mon[75294]: pgmap v1679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.872407) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849282872450, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1275, "num_deletes": 258, "total_data_size": 1963309, "memory_usage": 1986976, "flush_reason": "Manual Compaction"}
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849282920894, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1933815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32448, "largest_seqno": 33722, "table_properties": {"data_size": 1927686, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12748, "raw_average_key_size": 19, "raw_value_size": 1915303, "raw_average_value_size": 2946, "num_data_blocks": 153, "num_entries": 650, "num_filter_entries": 650, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849154, "oldest_key_time": 1769849154, "file_creation_time": 1769849282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 48548 microseconds, and 4457 cpu microseconds.
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.920954) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1933815 bytes OK
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.920974) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.966500) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.966546) EVENT_LOG_v1 {"time_micros": 1769849282966536, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.966573) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1957534, prev total WAL file size 1957534, number of live WAL files 2.
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.967576) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1888KB)], [71(9327KB)]
Jan 31 08:48:02 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849282967619, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11485575, "oldest_snapshot_seqno": -1}
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5614 keys, 11380002 bytes, temperature: kUnknown
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849283175550, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11380002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11338856, "index_size": 25996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 140994, "raw_average_key_size": 25, "raw_value_size": 11233842, "raw_average_value_size": 2001, "num_data_blocks": 1073, "num_entries": 5614, "num_filter_entries": 5614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769849282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.175831) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11380002 bytes
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.217467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.2 rd, 54.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(11.8) write-amplify(5.9) OK, records in: 6146, records dropped: 532 output_compression: NoCompression
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.217524) EVENT_LOG_v1 {"time_micros": 1769849283217503, "job": 40, "event": "compaction_finished", "compaction_time_micros": 207997, "compaction_time_cpu_micros": 17081, "output_level": 6, "num_output_files": 1, "total_output_size": 11380002, "num_input_records": 6146, "num_output_records": 5614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849283218063, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849283219576, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:02.967523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.219616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.219622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.219623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.219625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:03 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:03.219628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:04 compute-0 podman[264267]: 2026-01-31 08:48:04.19857962 +0000 UTC m=+0.072238372 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 08:48:04 compute-0 ceph-mon[75294]: pgmap v1680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:05 compute-0 sshd-session[264286]: Connection closed by authenticating user root 193.32.162.145 port 40422 [preauth]
Jan 31 08:48:06 compute-0 ceph-mon[75294]: pgmap v1681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.998333557808532e-07 of space, bias 1.0, pg target 0.000269950006734256 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.912953184942191e-06 of space, bias 4.0, pg target 0.002295543821930629 quantized to 16 (current 16)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:48:07 compute-0 ceph-mon[75294]: pgmap v1682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:09 compute-0 ceph-mon[75294]: pgmap v1683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:11 compute-0 ceph-mon[75294]: pgmap v1684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:12 compute-0 podman[264288]: 2026-01-31 08:48:12.216180951 +0000 UTC m=+0.085602322 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:48:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:13 compute-0 ceph-mon[75294]: pgmap v1685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:15 compute-0 ceph-mon[75294]: pgmap v1686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.918815) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849297918859, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 361, "num_deletes": 251, "total_data_size": 237434, "memory_usage": 244928, "flush_reason": "Manual Compaction"}
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849297938820, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 208573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33723, "largest_seqno": 34083, "table_properties": {"data_size": 206362, "index_size": 375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5935, "raw_average_key_size": 20, "raw_value_size": 202052, "raw_average_value_size": 687, "num_data_blocks": 17, "num_entries": 294, "num_filter_entries": 294, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849283, "oldest_key_time": 1769849283, "file_creation_time": 1769849297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 20054 microseconds, and 1101 cpu microseconds.
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.938872) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 208573 bytes OK
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.938889) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.944259) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.944344) EVENT_LOG_v1 {"time_micros": 1769849297944329, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.944384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 235040, prev total WAL file size 236201, number of live WAL files 2.
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.945079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323531' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(203KB)], [74(10MB)]
Jan 31 08:48:17 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849297945139, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11588575, "oldest_snapshot_seqno": -1}
Jan 31 08:48:17 compute-0 ceph-mon[75294]: pgmap v1687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5401 keys, 8272137 bytes, temperature: kUnknown
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849298046229, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 8272137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8237184, "index_size": 20379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136744, "raw_average_key_size": 25, "raw_value_size": 8140532, "raw_average_value_size": 1507, "num_data_blocks": 839, "num_entries": 5401, "num_filter_entries": 5401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846184, "oldest_key_time": 0, "file_creation_time": 1769849297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3294ee4-27e2-4bb0-ad9a-134acd801483", "db_session_id": "Y7RM6XVJX1JMWBYCK9C2", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.046790) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 8272137 bytes
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.050135) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.5 rd, 81.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.9 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(95.2) write-amplify(39.7) OK, records in: 5908, records dropped: 507 output_compression: NoCompression
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.050173) EVENT_LOG_v1 {"time_micros": 1769849298050151, "job": 42, "event": "compaction_finished", "compaction_time_micros": 101171, "compaction_time_cpu_micros": 18182, "output_level": 6, "num_output_files": 1, "total_output_size": 8272137, "num_input_records": 5908, "num_output_records": 5401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849298050334, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849298051558, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:17.944996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.051722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.051729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.051731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.051733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:18 compute-0 ceph-mon[75294]: rocksdb: (Original Log Time 2026/01/31-08:48:18.051735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.237 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.238 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.238 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.238 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.239 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:48:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294124949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.814 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.997 240090 WARNING nova.virt.libvirt.driver [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.998 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5126MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.998 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:18 compute-0 nova_compute[240062]: 2026-01-31 08:48:18.999 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.078 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.078 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.095 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:19 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3294124949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:19 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:48:19 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020877051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.713 240090 DEBUG oslo_concurrency.processutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.718 240090 DEBUG nova.compute.provider_tree [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed in ProviderTree for provider: 4da0c29a-ac15-4049-acad-d0fd4b82723a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.770 240090 DEBUG nova.scheduler.client.report [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Inventory has not changed for provider 4da0c29a-ac15-4049-acad-d0fd4b82723a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.773 240090 DEBUG nova.compute.resource_tracker [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:48:19 compute-0 nova_compute[240062]: 2026-01-31 08:48:19.773 240090 DEBUG oslo_concurrency.lockutils [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:20 compute-0 ceph-mon[75294]: pgmap v1688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:20 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2020877051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:20 compute-0 nova_compute[240062]: 2026-01-31 08:48:20.774 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:20 compute-0 nova_compute[240062]: 2026-01-31 08:48:20.775 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:20 compute-0 nova_compute[240062]: 2026-01-31 08:48:20.775 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:48:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:48:22 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 6874 writes, 26K keys, 6874 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6874 writes, 1465 syncs, 4.69 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 325 writes, 616 keys, 325 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 325 writes, 157 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:48:22 compute-0 nova_compute[240062]: 2026-01-31 08:48:22.150 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:22 compute-0 nova_compute[240062]: 2026-01-31 08:48:22.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:22 compute-0 nova_compute[240062]: 2026-01-31 08:48:22.154 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:48:22 compute-0 nova_compute[240062]: 2026-01-31 08:48:22.154 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:48:22 compute-0 nova_compute[240062]: 2026-01-31 08:48:22.209 240090 DEBUG nova.compute.manager [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:48:22 compute-0 ceph-mon[75294]: pgmap v1689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:22 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:22 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:23 compute-0 ceph-mon[75294]: pgmap v1690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:24 compute-0 nova_compute[240062]: 2026-01-31 08:48:24.154 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:24 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:24 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:25 compute-0 nova_compute[240062]: 2026-01-31 08:48:25.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:25 compute-0 ceph-mon[75294]: pgmap v1691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:48:25 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 8190 writes, 32K keys, 8190 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8190 writes, 1880 syncs, 4.36 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 271 writes, 571 keys, 271 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 271 writes, 126 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:48:26 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:27 compute-0 nova_compute[240062]: 2026-01-31 08:48:27.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:27 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:27 compute-0 ceph-mon[75294]: pgmap v1692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:28 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:29 compute-0 nova_compute[240062]: 2026-01-31 08:48:29.155 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:48:30 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 7109 writes, 27K keys, 7109 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7109 writes, 1507 syncs, 4.72 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 338 writes, 698 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 338 writes, 160 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:48:30 compute-0 ceph-mon[75294]: pgmap v1693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:30 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:31 compute-0 ceph-mon[75294]: pgmap v1694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:32 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:32 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:33 compute-0 sudo[264357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:33 compute-0 sudo[264357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:33 compute-0 sudo[264357]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:33 compute-0 sudo[264382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:48:33 compute-0 sudo[264382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:34 compute-0 nova_compute[240062]: 2026-01-31 08:48:34.149 240090 DEBUG oslo_service.periodic_task [None req-6f70d116-6ffe-4252-a6ee-d43408d6ef16 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:34 compute-0 sudo[264382]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:34 compute-0 ceph-mon[75294]: pgmap v1695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:48:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:34 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:48:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:48:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:48:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:48:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:48:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:48:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:48:34 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:48:34 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:48:34 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:34 compute-0 sudo[264438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:34 compute-0 sudo[264438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:34 compute-0 sudo[264438]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:34 compute-0 sudo[264469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:48:34 compute-0 podman[264462]: 2026-01-31 08:48:34.766434939 +0000 UTC m=+0.055224155 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:48:34 compute-0 sudo[264469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:35 compute-0 podman[264518]: 2026-01-31 08:48:35.117438375 +0000 UTC m=+0.095465107 container create 59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:48:35 compute-0 podman[264518]: 2026-01-31 08:48:35.050962178 +0000 UTC m=+0.028988900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:35 compute-0 systemd[1]: Started libpod-conmon-59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69.scope.
Jan 31 08:48:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:35 compute-0 podman[264518]: 2026-01-31 08:48:35.554887154 +0000 UTC m=+0.532913896 container init 59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:48:35 compute-0 podman[264518]: 2026-01-31 08:48:35.56254538 +0000 UTC m=+0.540572112 container start 59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:48:35 compute-0 vigilant_bardeen[264534]: 167 167
Jan 31 08:48:35 compute-0 systemd[1]: libpod-59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69.scope: Deactivated successfully.
Jan 31 08:48:35 compute-0 podman[264518]: 2026-01-31 08:48:35.810468115 +0000 UTC m=+0.788494877 container attach 59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:48:35 compute-0 podman[264518]: 2026-01-31 08:48:35.811994366 +0000 UTC m=+0.790021128 container died 59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:48:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:35 compute-0 ceph-mon[75294]: pgmap v1696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:48:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:48:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:48:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:48:35 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c92d85649f3e4bc66149ae6eb60b8635e9bbcfa72061e71c97e6120f7cbce2-merged.mount: Deactivated successfully.
Jan 31 08:48:36 compute-0 podman[264518]: 2026-01-31 08:48:36.372064612 +0000 UTC m=+1.350091334 container remove 59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_bardeen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:48:36 compute-0 systemd[1]: libpod-conmon-59d1984146ab08e3e017c8abf14df24650829b0488f18e0e3fa76f5f7f515a69.scope: Deactivated successfully.
Jan 31 08:48:36 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:36 compute-0 podman[264559]: 2026-01-31 08:48:36.550152728 +0000 UTC m=+0.093883904 container create 0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bose, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:48:36 compute-0 podman[264559]: 2026-01-31 08:48:36.479053297 +0000 UTC m=+0.022784493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:36 compute-0 systemd[1]: Started libpod-conmon-0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1.scope.
Jan 31 08:48:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331e726cb7b57af72284eb2498b2a76704cadd8f6ee7829ad7217bf154681909/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331e726cb7b57af72284eb2498b2a76704cadd8f6ee7829ad7217bf154681909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331e726cb7b57af72284eb2498b2a76704cadd8f6ee7829ad7217bf154681909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331e726cb7b57af72284eb2498b2a76704cadd8f6ee7829ad7217bf154681909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331e726cb7b57af72284eb2498b2a76704cadd8f6ee7829ad7217bf154681909/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:36 compute-0 podman[264559]: 2026-01-31 08:48:36.713988203 +0000 UTC m=+0.257719389 container init 0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bose, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:48:36 compute-0 podman[264559]: 2026-01-31 08:48:36.720361434 +0000 UTC m=+0.264092620 container start 0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:48:36 compute-0 podman[264559]: 2026-01-31 08:48:36.743190958 +0000 UTC m=+0.286922164 container attach 0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bose, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:48:37 compute-0 ceph-mgr[75591]: [devicehealth INFO root] Check health
Jan 31 08:48:37 compute-0 infallible_bose[264575]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:48:37 compute-0 infallible_bose[264575]: --> All data devices are unavailable
Jan 31 08:48:37 compute-0 systemd[1]: libpod-0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1.scope: Deactivated successfully.
Jan 31 08:48:37 compute-0 podman[264559]: 2026-01-31 08:48:37.171962144 +0000 UTC m=+0.715693340 container died 0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 08:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-331e726cb7b57af72284eb2498b2a76704cadd8f6ee7829ad7217bf154681909-merged.mount: Deactivated successfully.
Jan 31 08:48:37 compute-0 podman[264559]: 2026-01-31 08:48:37.315506363 +0000 UTC m=+0.859237539 container remove 0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bose, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:48:37 compute-0 systemd[1]: libpod-conmon-0e2f914a60c5efcba473e2b1bf16edd3a3700e533cd4b4cd641305d6b0388bf1.scope: Deactivated successfully.
Jan 31 08:48:37 compute-0 sudo[264469]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:37 compute-0 sudo[264607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:37 compute-0 sudo[264607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:37 compute-0 sudo[264607]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:37 compute-0 sudo[264632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- lvm list --format json
Jan 31 08:48:37 compute-0 sudo[264632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:37 compute-0 podman[264666]: 2026-01-31 08:48:37.815817412 +0000 UTC m=+0.057624910 container create 19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lichterman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:48:37 compute-0 systemd[1]: Started libpod-conmon-19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952.scope.
Jan 31 08:48:37 compute-0 podman[264666]: 2026-01-31 08:48:37.785060675 +0000 UTC m=+0.026868203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:37 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:38 compute-0 podman[264666]: 2026-01-31 08:48:38.063057799 +0000 UTC m=+0.304865317 container init 19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lichterman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:48:38 compute-0 ceph-mon[75294]: pgmap v1697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:38 compute-0 podman[264666]: 2026-01-31 08:48:38.07056153 +0000 UTC m=+0.312369038 container start 19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:48:38 compute-0 happy_lichterman[264680]: 167 167
Jan 31 08:48:38 compute-0 podman[264666]: 2026-01-31 08:48:38.080557029 +0000 UTC m=+0.322364807 container attach 19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:48:38 compute-0 systemd[1]: libpod-19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952.scope: Deactivated successfully.
Jan 31 08:48:38 compute-0 podman[264666]: 2026-01-31 08:48:38.091249797 +0000 UTC m=+0.333057325 container died 19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9669e983d123d447d457a66dcb2357082efa3ea3f1b094b692eb3ddf45a606f6-merged.mount: Deactivated successfully.
Jan 31 08:48:38 compute-0 podman[264666]: 2026-01-31 08:48:38.194320368 +0000 UTC m=+0.436127866 container remove 19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lichterman, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:48:38 compute-0 systemd[1]: libpod-conmon-19fe518e5fa8a81c598643d0a5f8481073e1bd367d258683f6eda4dadda69952.scope: Deactivated successfully.
Jan 31 08:48:38 compute-0 podman[264705]: 2026-01-31 08:48:38.31720676 +0000 UTC m=+0.025183158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:38 compute-0 podman[264705]: 2026-01-31 08:48:38.425289157 +0000 UTC m=+0.133265525 container create d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:48:38 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:38 compute-0 systemd[1]: Started libpod-conmon-d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6.scope.
Jan 31 08:48:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0e9e07330211f4c423a247fe6d224b416f9fcd74b74ad9a90572c72f26e1c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0e9e07330211f4c423a247fe6d224b416f9fcd74b74ad9a90572c72f26e1c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0e9e07330211f4c423a247fe6d224b416f9fcd74b74ad9a90572c72f26e1c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0e9e07330211f4c423a247fe6d224b416f9fcd74b74ad9a90572c72f26e1c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:38 compute-0 podman[264705]: 2026-01-31 08:48:38.634030598 +0000 UTC m=+0.342006966 container init d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:48:38 compute-0 podman[264705]: 2026-01-31 08:48:38.640735537 +0000 UTC m=+0.348711905 container start d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:48:38 compute-0 podman[264705]: 2026-01-31 08:48:38.652836863 +0000 UTC m=+0.360813251 container attach d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:48:38 compute-0 great_satoshi[264722]: {
Jan 31 08:48:38 compute-0 great_satoshi[264722]:     "0": [
Jan 31 08:48:38 compute-0 great_satoshi[264722]:         {
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "devices": [
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "/dev/loop3"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             ],
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_name": "ceph_lv0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_size": "21470642176",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=138b43d4-6b22-4784-83a9-3b3a12b6e8dd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "name": "ceph_lv0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "tags": {
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.block_uuid": "k9fXkf-49aO-Cjud-uLUK-KWDK-Jc4K-iidRAl",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cluster_name": "ceph",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.crush_device_class": "",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.encrypted": "0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.objectstore": "bluestore",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osd_fsid": "138b43d4-6b22-4784-83a9-3b3a12b6e8dd",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osd_id": "0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.type": "block",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.vdo": "0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.with_tpm": "0"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             },
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "type": "block",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "vg_name": "ceph_vg0"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:         }
Jan 31 08:48:38 compute-0 great_satoshi[264722]:     ],
Jan 31 08:48:38 compute-0 great_satoshi[264722]:     "1": [
Jan 31 08:48:38 compute-0 great_satoshi[264722]:         {
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "devices": [
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "/dev/loop4"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             ],
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_name": "ceph_lv1",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_size": "21470642176",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d185ab0-8a71-40fb-b34c-388b2e694746,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "name": "ceph_lv1",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "tags": {
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.block_uuid": "cB2wOM-nOWa-Pthd-GmGX-v5MR-fkO7-Lz1w56",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cluster_name": "ceph",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.crush_device_class": "",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.encrypted": "0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.objectstore": "bluestore",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osd_fsid": "4d185ab0-8a71-40fb-b34c-388b2e694746",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osd_id": "1",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.type": "block",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.vdo": "0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.with_tpm": "0"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             },
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "type": "block",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "vg_name": "ceph_vg1"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:         }
Jan 31 08:48:38 compute-0 great_satoshi[264722]:     ],
Jan 31 08:48:38 compute-0 great_satoshi[264722]:     "2": [
Jan 31 08:48:38 compute-0 great_satoshi[264722]:         {
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "devices": [
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "/dev/loop5"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             ],
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_name": "ceph_lv2",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_size": "21470642176",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dc03f344-536f-5591-add9-31059f42637c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39d89c18-9d94-4e5d-ba4b-7f289542d53c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "lv_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "name": "ceph_lv2",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "tags": {
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.block_uuid": "vqXHIH-77hc-YOio-5Cij-VTWw-CAML-Zu6IxI",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cluster_fsid": "dc03f344-536f-5591-add9-31059f42637c",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.cluster_name": "ceph",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.crush_device_class": "",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.encrypted": "0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.objectstore": "bluestore",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osd_fsid": "39d89c18-9d94-4e5d-ba4b-7f289542d53c",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osd_id": "2",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.type": "block",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.vdo": "0",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:                 "ceph.with_tpm": "0"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             },
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "type": "block",
Jan 31 08:48:38 compute-0 great_satoshi[264722]:             "vg_name": "ceph_vg2"
Jan 31 08:48:38 compute-0 great_satoshi[264722]:         }
Jan 31 08:48:38 compute-0 great_satoshi[264722]:     ]
Jan 31 08:48:38 compute-0 great_satoshi[264722]: }
Jan 31 08:48:38 compute-0 systemd[1]: libpod-d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6.scope: Deactivated successfully.
Jan 31 08:48:38 compute-0 podman[264705]: 2026-01-31 08:48:38.969964998 +0000 UTC m=+0.677941366 container died d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0e9e07330211f4c423a247fe6d224b416f9fcd74b74ad9a90572c72f26e1c1-merged.mount: Deactivated successfully.
Jan 31 08:48:39 compute-0 podman[264705]: 2026-01-31 08:48:39.126407303 +0000 UTC m=+0.834383681 container remove d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:39 compute-0 systemd[1]: libpod-conmon-d3fc0bd5766e3c38edac32eaf9ba987d312d40fb826099f1939d7123388ab8e6.scope: Deactivated successfully.
Jan 31 08:48:39 compute-0 sudo[264632]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:39 compute-0 sudo[264743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:39 compute-0 sudo[264743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:39 compute-0 sudo[264743]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:39 compute-0 sudo[264768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dc03f344-536f-5591-add9-31059f42637c/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dc03f344-536f-5591-add9-31059f42637c -- raw list --format json
Jan 31 08:48:39 compute-0 sudo[264768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:48:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3114004017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:48:39 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:48:39 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3114004017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:48:39 compute-0 podman[264804]: 2026-01-31 08:48:39.638097639 +0000 UTC m=+0.048936117 container create 9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:48:39 compute-0 systemd[1]: Started libpod-conmon-9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d.scope.
Jan 31 08:48:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:39 compute-0 podman[264804]: 2026-01-31 08:48:39.618270926 +0000 UTC m=+0.029109424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:39 compute-0 sshd-session[264821]: Accepted publickey for zuul from 192.168.122.10 port 48296 ssh2: ECDSA SHA256:1lsYQXnNS2Ptu0YKqDCCg85E8bfZetXu8NXs77tQFNg
Jan 31 08:48:39 compute-0 systemd-logind[810]: New session 56 of user zuul.
Jan 31 08:48:39 compute-0 systemd[1]: Started Session 56 of User zuul.
Jan 31 08:48:39 compute-0 sshd-session[264821]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:48:39 compute-0 podman[264804]: 2026-01-31 08:48:39.816624227 +0000 UTC m=+0.227462725 container init 9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:48:39 compute-0 podman[264804]: 2026-01-31 08:48:39.823227745 +0000 UTC m=+0.234066223 container start 9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:48:39 compute-0 affectionate_feynman[264823]: 167 167
Jan 31 08:48:39 compute-0 systemd[1]: libpod-9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d.scope: Deactivated successfully.
Jan 31 08:48:39 compute-0 sudo[264831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 08:48:39 compute-0 sudo[264831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:48:39 compute-0 podman[264804]: 2026-01-31 08:48:39.995192958 +0000 UTC m=+0.406031456 container attach 9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:48:39 compute-0 podman[264804]: 2026-01-31 08:48:39.996332289 +0000 UTC m=+0.407170767 container died 9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:48:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55ed02712ede0f40f6b6fe952d3385395523a77d459f1de0a1ca1aa3b2612d0-merged.mount: Deactivated successfully.
Jan 31 08:48:40 compute-0 ceph-mon[75294]: pgmap v1698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3114004017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:48:40 compute-0 ceph-mon[75294]: from='client.? 192.168.122.10:0/3114004017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:48:40 compute-0 podman[264804]: 2026-01-31 08:48:40.196045237 +0000 UTC m=+0.606883715 container remove 9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:48:40 compute-0 podman[264881]: 2026-01-31 08:48:40.336641437 +0000 UTC m=+0.024835369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:40 compute-0 podman[264881]: 2026-01-31 08:48:40.466584919 +0000 UTC m=+0.154778821 container create c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:48:40 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:40 compute-0 systemd[1]: libpod-conmon-9298ae7e2f2fa0a30386a96d651e8a9b34ca74dccddf07c2b0f6455df932ce8d.scope: Deactivated successfully.
Jan 31 08:48:40 compute-0 systemd[1]: Started libpod-conmon-c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644.scope.
Jan 31 08:48:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb021db416a809bf5bf9dce39f60a4296e1001f22b6a764742716d9dc306ead9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb021db416a809bf5bf9dce39f60a4296e1001f22b6a764742716d9dc306ead9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb021db416a809bf5bf9dce39f60a4296e1001f22b6a764742716d9dc306ead9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb021db416a809bf5bf9dce39f60a4296e1001f22b6a764742716d9dc306ead9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:40 compute-0 podman[264881]: 2026-01-31 08:48:40.922406023 +0000 UTC m=+0.610599955 container init c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:48:40 compute-0 podman[264881]: 2026-01-31 08:48:40.929998537 +0000 UTC m=+0.618192429 container start c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_williamson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:48:40 compute-0 podman[264881]: 2026-01-31 08:48:40.976543289 +0000 UTC m=+0.664737201 container attach c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:48:41 compute-0 lvm[265043]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:48:41 compute-0 lvm[265040]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:48:41 compute-0 lvm[265043]: VG ceph_vg1 finished
Jan 31 08:48:41 compute-0 lvm[265040]: VG ceph_vg0 finished
Jan 31 08:48:41 compute-0 lvm[265045]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:48:41 compute-0 lvm[265045]: VG ceph_vg2 finished
Jan 31 08:48:41 compute-0 practical_williamson[264898]: {}
Jan 31 08:48:41 compute-0 systemd[1]: libpod-c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644.scope: Deactivated successfully.
Jan 31 08:48:41 compute-0 systemd[1]: libpod-c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644.scope: Consumed 1.225s CPU time.
Jan 31 08:48:41 compute-0 podman[264881]: 2026-01-31 08:48:41.835071118 +0000 UTC m=+1.523265020 container died c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb021db416a809bf5bf9dce39f60a4296e1001f22b6a764742716d9dc306ead9-merged.mount: Deactivated successfully.
Jan 31 08:48:41 compute-0 podman[264881]: 2026-01-31 08:48:41.967222829 +0000 UTC m=+1.655416731 container remove c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_williamson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:48:41 compute-0 systemd[1]: libpod-conmon-c321a76623ae35e58d5bfe059e33cf8c69b770295cdaf1e4e27dc71449adc644.scope: Deactivated successfully.
Jan 31 08:48:42 compute-0 sudo[264768]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:48:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:48:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:48:42 compute-0 ceph-mon[75294]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:48:42 compute-0 sudo[265093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:48:42 compute-0 sudo[265093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:42 compute-0 sudo[265093]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:42 compute-0 ceph-mon[75294]: pgmap v1699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:48:42 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' 
Jan 31 08:48:42 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:42 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14658 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:42 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:43 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14660 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:43 compute-0 podman[265180]: 2026-01-31 08:48:43.246740395 +0000 UTC m=+0.113257665 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 08:48:43 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 08:48:43 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721051341' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 08:48:44 compute-0 ceph-mon[75294]: pgmap v1700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:44 compute-0 ceph-mon[75294]: from='client.14658 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:44 compute-0 ceph-mon[75294]: from='client.14660 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:44 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3721051341' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 08:48:44 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:46 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:48:46.993 155810 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:48:46.995 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:46 compute-0 ovn_metadata_agent[155805]: 2026-01-31 08:48:46.995 155810 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:47 compute-0 ceph-mon[75294]: pgmap v1701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:47 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:48 compute-0 ceph-mon[75294]: pgmap v1702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:48 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:49 compute-0 ovs-vsctl[265332]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 08:48:49 compute-0 ceph-mon[75294]: pgmap v1703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:50 compute-0 virtqemud[240526]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 08:48:50 compute-0 virtqemud[240526]: hostname: compute-0
Jan 31 08:48:50 compute-0 virtqemud[240526]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 08:48:50 compute-0 virtqemud[240526]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 08:48:50 compute-0 virtqemud[240526]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 08:48:50 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:50 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: cache status {prefix=cache status} (starting...)
Jan 31 08:48:50 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: client ls {prefix=client ls} (starting...)
Jan 31 08:48:50 compute-0 lvm[265669]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:48:50 compute-0 lvm[265669]: VG ceph_vg1 finished
Jan 31 08:48:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Optimize plan auto_2026-01-31_08:48:50
Jan 31 08:48:50 compute-0 ceph-mgr[75591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:48:50 compute-0 ceph-mgr[75591]: [balancer INFO root] do_upmap
Jan 31 08:48:50 compute-0 ceph-mgr[75591]: [balancer INFO root] pools ['backups', 'vms', 'volumes', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Jan 31 08:48:50 compute-0 ceph-mgr[75591]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:48:50 compute-0 lvm[265680]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:48:50 compute-0 lvm[265680]: VG ceph_vg2 finished
Jan 31 08:48:50 compute-0 lvm[265684]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:48:50 compute-0 lvm[265684]: VG ceph_vg0 finished
Jan 31 08:48:51 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14664 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:51 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 08:48:51 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 08:48:51 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14666 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:51 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 08:48:51 compute-0 ceph-mon[75294]: pgmap v1704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:51 compute-0 ceph-mon[75294]: from='client.14664 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:51 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 08:48:52 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 08:48:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 31 08:48:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457797433' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 08:48:52 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14670 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:52 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 08:48:52 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 08:48:52 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:48:52 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274320391' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:52 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 08:48:52 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14674 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:52 compute-0 ceph-mgr[75591]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:48:52 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: 2026-01-31T08:48:52.692+0000 7f9067f84640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:48:52 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: ops {prefix=ops} (starting...)
Jan 31 08:48:52 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:52 compute-0 ceph-mon[75294]: from='client.14666 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:52 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3457797433' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 08:48:52 compute-0 ceph-mon[75294]: from='client.14670 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:52 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1274320391' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 31 08:48:53 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3800331209' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 31 08:48:53 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754442526' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: session ls {prefix=session ls} (starting...)
Jan 31 08:48:53 compute-0 ceph-mds[96942]: mds.cephfs.compute-0.xdvglw asok_command: status {prefix=status} (starting...)
Jan 31 08:48:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 31 08:48:53 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758230679' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 08:48:53 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685725593' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: pgmap v1705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:53 compute-0 ceph-mon[75294]: from='client.14674 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3800331209' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/754442526' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2758230679' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 08:48:53 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1685725593' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14684 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:54 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 08:48:54 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285950901' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14688 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:54 compute-0 ceph-mgr[75591]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:54 compute-0 ceph-mon[75294]: from='client.14684 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:54 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1285950901' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:48:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 08:48:55 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328372524' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:48:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 31 08:48:55 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218082289' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 08:48:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 08:48:55 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715702655' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:48:55 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 31 08:48:55 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490013225' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:48:55 compute-0 ceph-mgr[75591]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:48:56 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14700 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:56 compute-0 ceph-mgr[75591]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 08:48:56 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: 2026-01-31T08:48:56.300+0000 7f9067f84640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 08:48:56 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:56 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 08:48:56 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594231672' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 08:48:56 compute-0 ceph-mon[75294]: pgmap v1706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:56 compute-0 ceph-mon[75294]: from='client.14688 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:56 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/328372524' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:48:56 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1218082289' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 08:48:56 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1715702655' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:48:56 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/490013225' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 08:48:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 31 08:48:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600507373' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:07.479499+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:08.479709+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:09.479860+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:10.479990+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:11.480108+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:12.480251+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:13.480380+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:14.480567+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:15.480728+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:16.480889+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:17.481020+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:18.481161+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:19.481380+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:20.481568+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:21.481730+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:22.481930+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:23.482100+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:24.482321+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:25.482519+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:26.482765+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:27.482937+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:28.483068+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:29.483190+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:30.483334+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:31.483473+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:32.483621+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:33.483816+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:34.484011+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:35.484093+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:36.484245+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:37.484362+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:38.484510+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:39.484689+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:40.484802+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:41.484929+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:42.485078+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:43.485280+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:44.485433+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:45.485556+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:46.485639+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:47.485767+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:48.485869+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:49.485997+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:50.486123+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:51.486227+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:52.486772+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:53.487206+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:54.487477+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:55.488200+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:56.488331+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:57.488690+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:58.488885+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:59.489397+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:00.489858+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:01.490175+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:02.490513+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:03.490734+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:04.490928+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:05.491098+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:06.491323+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:07.491538+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:08.491782+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:09.491942+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:10.492157+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:11.492334+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:12.492503+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:13.492716+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:14.493016+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:15.493201+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:16.493430+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:17.493590+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:18.493744+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:19.493903+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:20.494029+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:21.494211+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:22.494340+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:23.494473+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 729088 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:24.494743+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 729088 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:25.494980+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:26.495239+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:27.495428+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread fragmentation_score=0.000139 took=0.000033s
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:28.495606+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:29.495783+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:30.495950+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:31.496085+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:32.496209+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:33.496338+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:34.496534+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:35.496727+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:36.496870+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:37.497012+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:38.497176+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:39.497386+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:40.497695+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:41.497914+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:42.498038+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:43.498191+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:44.498370+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:45.498500+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:46.498806+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:47.498950+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:48.499109+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:49.499272+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:50.499449+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:51.499717+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:52.499857+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:53.499996+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:54.500153+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:55.500280+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:56.500413+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:57.500554+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:58.500760+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:59.500892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:00.501038+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:01.501167+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:02.501285+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:03.501457+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:04.501621+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:05.501817+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:06.501955+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:07.502191+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:08.502363+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:09.502579+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:10.502791+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:11.502933+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:12.503104+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:13.503286+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:14.503475+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:15.503611+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:16.503731+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:17.503849+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:18.504000+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:19.504184+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:20.504327+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:21.504452+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:22.504643+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:23.504822+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:24.504999+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:25.505160+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:26.505296+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:27.505413+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:28.505719+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:29.505902+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:30.506014+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:31.506134+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:32.506290+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:33.506482+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:34.506730+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:35.506849+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:36.507029+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:37.507185+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:38.507399+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:39.507620+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:40.507876+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:41.510214+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:42.510366+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:43.510539+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:44.510708+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:45.510907+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:46.511050+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:47.511216+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:48.511340+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:49.511511+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:50.511794+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:51.511977+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:52.512190+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:53.512406+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:54.512623+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:55.512824+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:56.512990+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:57.513141+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:58.513265+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:59.513484+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 5788 writes, 24K keys, 5788 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5788 writes, 912 syncs, 6.35 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c329a9b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:00.513809+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 655360 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:01.514502+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 655360 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:02.515524+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 647168 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:03.516595+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 647168 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:04.516712+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 647168 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:05.518848+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 647168 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:06.519013+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 647168 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:07.519163+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:08.519314+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:09.519450+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:10.519594+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:11.519721+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:12.519865+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:13.519994+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:14.520132+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:15.520282+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:16.520492+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:17.520702+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:18.520933+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:19.521313+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:20.521645+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:21.521771+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:22.522033+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:23.522224+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:24.522428+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:25.522573+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:26.522740+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:27.522948+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:28.523104+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:29.523274+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:30.523464+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:31.523604+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:32.523853+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:33.524072+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:34.524271+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:35.526280+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:36.526820+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:37.527279+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:38.527542+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:39.527930+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:40.528899+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:41.529148+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:42.529681+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:43.530149+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:44.530697+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:45.531322+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:46.531583+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:47.531751+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:48.532337+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:49.532472+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:50.532771+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 630784 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:51.532908+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:52.533041+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:53.533201+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:54.534097+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:55.534242+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:56.534508+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:57.534791+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:58.535033+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:59.535224+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:00.535444+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:01.535937+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:02.536163+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:03.536359+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:04.536805+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:05.536948+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:06.537111+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:07.537263+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:08.537399+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:09.537726+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:10.537859+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:11.538047+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:12.538264+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:13.538549+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:14.538884+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:15.539009+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:16.539154+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:17.539343+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:18.539538+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:19.539707+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:20.539863+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:21.540045+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:22.540220+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:23.540365+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:24.540547+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 08:48:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383153784' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:25.540805+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:26.540915+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:27.541056+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:28.541164+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:29.541266+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:30.541484+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:31.541596+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:32.541701+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:33.541919+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:34.542158+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:35.542370+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:36.542570+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:37.542775+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:38.542906+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:39.543087+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:40.543219+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:41.543354+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 279.861206055s of 281.979461670s, submitted: 22
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:42.543486+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:43.548341+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:44.548588+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:45.548779+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:46.548925+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:47.549060+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:48.549186+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:49.549323+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:50.549497+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:51.549601+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:52.549789+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.242948055s of 10.419075012s, submitted: 54
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:53.550000+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:54.550176+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:55.550337+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:56.550494+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:57.550694+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:58.550872+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:59.551004+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:00.551131+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:01.551266+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:02.551439+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:03.551624+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:04.551811+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:05.551942+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:06.552095+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:07.552197+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:08.552303+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:09.552432+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:10.552562+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:11.552903+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:12.553077+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:13.553196+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:14.553444+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:15.553644+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1474560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:16.553786+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1474560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:17.553978+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1474560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:18.554211+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:19.554513+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:20.554712+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:21.554866+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:22.555017+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:23.555132+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:24.555292+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:25.555475+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:26.555677+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:27.555851+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:28.555968+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:29.556110+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:30.556343+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:31.556588+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:32.556892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:33.557123+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:34.557409+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:35.557609+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:36.557821+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:37.558012+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:38.558244+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:39.558382+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:40.558523+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:41.558738+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:42.558864+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:43.558985+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:44.559341+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:45.559489+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972366 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32d6ad400
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 handle_osd_map epochs [139,139], i have 137, src has [1,139]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 137 handle_osd_map epochs [138,139], i have 137, src has [1,139]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 51.581363678s of 52.943164825s, submitted: 36
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1187840 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcd8fb/0x19f000, compress 0x0/0x0/0x0, omap 0x12ea6, meta 0x2bbd15a), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:46.559685+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 10289152 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:47.559821+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 10264576 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:48.559950+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 140 ms_handle_reset con 0x55c32d6ad400 session 0x55c32d62fa40
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 10207232 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:49.560101+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32d6ad800
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 10067968 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:50.560235+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010769 data_alloc: 218103808 data_used: 7638
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 10035200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x542c62/0x619000, compress 0x0/0x0/0x0, omap 0x13a38, meta 0x2bbc5c8), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:51.560397+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fca13000/0x0/0x4ffc00000, data 0x542c62/0x619000, compress 0x0/0x0/0x0, omap 0x13a94, meta 0x2bbc56c), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 5332992 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:52.560582+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 9961472 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:53.560710+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 9961472 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:54.560848+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 9936896 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 ms_handle_reset con 0x55c32d6ad800 session 0x55c32bf708c0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:55.560984+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040593 data_alloc: 218103808 data_used: 8223
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc59b000/0x0/0x4ffc00000, data 0x9b627d/0xa8f000, compress 0x0/0x0/0x0, omap 0x14129, meta 0x2bbbed7), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 9936896 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:56.561130+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 9936896 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc59b000/0x0/0x4ffc00000, data 0x9b627d/0xa8f000, compress 0x0/0x0/0x0, omap 0x14129, meta 0x2bbbed7), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:57.561325+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 9928704 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:58.561569+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc59b000/0x0/0x4ffc00000, data 0x9b627d/0xa8f000, compress 0x0/0x0/0x0, omap 0x14129, meta 0x2bbbed7), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 9912320 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:59.561847+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc59b000/0x0/0x4ffc00000, data 0x9b627d/0xa8f000, compress 0x0/0x0/0x0, omap 0x14129, meta 0x2bbbed7), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc59b000/0x0/0x4ffc00000, data 0x9b627d/0xa8f000, compress 0x0/0x0/0x0, omap 0x14129, meta 0x2bbbed7), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 9912320 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:00.562001+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040593 data_alloc: 218103808 data_used: 8223
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 9912320 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc59b000/0x0/0x4ffc00000, data 0x9b627d/0xa8f000, compress 0x0/0x0/0x0, omap 0x14129, meta 0x2bbbed7), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:01.562187+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 9912320 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:02.562405+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 9912320 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32df31c00
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.255562782s of 17.605319977s, submitted: 67
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fc59b000/0x0/0x4ffc00000, data 0x9b627d/0xa8f000, compress 0x0/0x0/0x0, omap 0x14129, meta 0x2bbbed7), peers [0,1] op hist [0,0,1])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:03.562540+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 9715712 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:04.562712+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 9715712 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:05.562828+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043663 data_alloc: 218103808 data_used: 8223
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 9674752 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fc598000/0x0/0x4ffc00000, data 0x9b7e6d/0xa92000, compress 0x0/0x0/0x0, omap 0x14633, meta 0x2bbb9cd), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:06.562996+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:07.563181+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 8568832 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:08.563324+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 143 ms_handle_reset con 0x55c32df31c00 session 0x55c32c1f8000
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:09.563464+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 8642560 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:10.563584+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019147 data_alloc: 218103808 data_used: 8223
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fca0a000/0x0/0x4ffc00000, data 0x547e4a/0x621000, compress 0x0/0x0/0x0, omap 0x1482b, meta 0x2bbb7d5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 8642560 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:11.563738+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32ab59c00
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 8634368 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:12.563876+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 8626176 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:13.564026+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.708085537s of 10.409974098s, submitted: 69
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:14.564162+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fca05000/0x0/0x4ffc00000, data 0x54b4b9/0x627000, compress 0x0/0x0/0x0, omap 0x15081, meta 0x2bbaf7f), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:15.564283+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 145 ms_handle_reset con 0x55c32ab59c00 session 0x55c32c2c1340
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001551 data_alloc: 218103808 data_used: 12284
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:16.564398+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdb4b9/0x1b7000, compress 0x0/0x0/0x0, omap 0x1521d, meta 0x2bbade3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:17.564512+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdb4b9/0x1b7000, compress 0x0/0x0/0x0, omap 0x1521d, meta 0x2bbade3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:18.564689+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:19.564850+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdb4b9/0x1b7000, compress 0x0/0x0/0x0, omap 0x1521d, meta 0x2bbade3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:20.564977+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001679 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:21.565090+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:22.565215+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:23.565339+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:24.565489+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:25.565621+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:26.565723+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:27.565854+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:28.566003+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:29.566186+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:30.566314+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:31.566458+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:32.566603+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:33.566747+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:34.566901+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:35.567037+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:36.567184+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:37.567310+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:38.567423+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:39.567563+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:40.567717+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:41.567935+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:42.568055+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:43.568212+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:44.568369+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:45.568502+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:46.568693+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:47.568810+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:48.568939+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:49.569101+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:50.569246+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:51.569415+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:52.569547+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:53.569692+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:54.569858+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:55.570002+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:56.570132+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:57.570278+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:58.570417+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:59.570552+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:00.570725+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:01.570870+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:02.571182+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:03.571384+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:04.571611+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:05.571762+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:06.571916+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:07.572044+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:08.572405+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:09.572790+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:10.572928+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:11.573054+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:12.573185+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:13.573326+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:14.573547+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:15.573724+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:16.573874+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:17.574008+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:18.574144+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:19.574301+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:20.574446+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:21.574593+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:22.574742+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:23.574861+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:24.575022+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:25.575148+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:26.575281+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:27.575420+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:28.575544+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:29.575703+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:30.575817+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:31.575946+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:32.576063+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:33.576222+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:34.576454+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:35.576601+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:36.576778+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 8609792 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:37.576988+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:38.577151+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:39.577290+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:40.577465+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:41.577628+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:42.577868+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:43.578130+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:44.578319+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:45.578466+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:46.578596+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:47.578741+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:48.578922+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:49.579094+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:50.579206+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:51.579351+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:52.579487+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:53.579709+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:54.579909+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:55.580068+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:56.580205+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:57.580337+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 8593408 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:58.580470+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:59.580617+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:00.580715+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:01.581261+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:02.581357+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:03.581478+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:04.582355+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:05.582507+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:06.582634+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:07.582800+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:08.582912+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:09.583053+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:10.583192+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:11.583393+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:12.583704+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:13.583966+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:14.584254+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:15.584465+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:16.584698+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:17.584865+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 8577024 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:18.585014+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:19.585159+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:20.585342+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:21.585522+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:22.585700+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:23.585858+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:24.586091+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:25.586228+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:26.586384+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:27.586539+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:28.586698+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:29.586843+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:30.587032+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:31.587190+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:32.587384+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:33.587674+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:34.587988+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:35.588157+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:36.588341+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:37.588515+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 8560640 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:38.588882+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:39.589024+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:40.589171+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:41.589324+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:42.589549+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:43.589755+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:44.589976+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:45.590151+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:46.590313+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:47.590460+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:48.590607+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:49.590775+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:50.590913+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:51.591318+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:52.591554+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:53.591760+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:54.591961+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:55.592238+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:56.592440+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:57.592604+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 8544256 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:58.592825+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:59.592939+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:00.593110+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:01.593303+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:02.593529+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:03.593716+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:04.593888+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:05.594142+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:06.594341+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:07.594614+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:08.594825+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:09.594998+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:10.595223+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:11.595441+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:12.595676+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:13.595892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:14.596108+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:15.596308+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:16.596519+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:17.596726+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 8527872 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:18.596943+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:19.597093+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:20.597326+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:21.597526+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:22.597687+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:23.597852+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:24.598079+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:25.598255+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:26.598413+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:27.598555+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:28.598786+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:29.598985+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:30.599173+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:31.599337+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:32.599488+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:33.599704+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:34.599887+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:35.600027+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:36.600251+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:37.600400+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 8511488 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:38.600598+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:39.600764+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:40.600902+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:41.601018+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:42.601140+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:43.601284+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:44.601460+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:45.601689+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:46.601899+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:47.602053+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:48.602195+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:49.602335+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:50.602478+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:51.602639+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:52.602845+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:53.603020+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:54.603236+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:55.603408+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:56.603521+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:57.603782+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 8495104 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:58.603984+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:59.604208+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:00.604409+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:01.604531+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:02.604712+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:03.604847+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:04.605056+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:05.605243+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:06.605391+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 8478720 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:07.605517+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:08.605616+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:09.605721+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:10.605884+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:11.606035+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:12.606800+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:13.606938+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:14.607080+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:15.607224+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:16.607345+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:17.607511+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 8470528 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:18.607724+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 8454144 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:19.607888+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 8454144 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:20.608034+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 8454144 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:21.608160+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 8454144 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:22.608300+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 8454144 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:23.608493+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 8454144 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:24.608730+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 8454144 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:25.608885+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 8437760 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:26.609043+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 8437760 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:27.609203+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 8437760 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:28.609324+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 8437760 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:29.609438+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 8437760 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:30.609620+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 8437760 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:31.609818+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 8437760 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:32.610017+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 8429568 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:33.610160+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 8429568 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:34.610789+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 8429568 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:35.611276+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 8429568 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:36.611478+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 8429568 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:37.611688+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 8429568 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:38.612202+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:39.612626+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:40.612986+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:41.613196+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:42.613516+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:43.613811+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:44.614160+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:45.614293+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:46.614569+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:47.614728+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:48.614869+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:49.615121+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:50.615301+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:51.615491+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:52.615622+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:53.615794+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:54.616076+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:55.616202+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:56.616366+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:57.616523+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76390400 unmapped: 8413184 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:58.616710+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:59.616859+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:00.617042+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:01.617221+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:02.617385+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:03.617566+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:04.617771+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:05.617998+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:06.618189+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:07.618357+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:08.618509+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:09.618707+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:10.618920+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 8396800 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:11.619080+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 8388608 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:12.619231+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 8388608 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:13.619426+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 8388608 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:14.619931+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 8388608 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:15.620072+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 8388608 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:16.620265+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 8388608 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:17.620397+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 8388608 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:18.620607+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 8372224 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:19.620741+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 8372224 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:20.620886+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:21.621149+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:22.621314+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:23.621540+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:24.621744+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:25.621883+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:26.622013+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:27.622217+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:28.622391+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:29.622528+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:30.622732+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:31.622904+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:32.623059+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:33.623202+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:34.623485+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:35.623680+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:36.623839+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:37.624008+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 8364032 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:38.624131+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:39.624273+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:40.624448+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:41.624640+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:42.624936+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:43.625233+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:44.625392+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:45.625584+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:46.625804+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:47.625989+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:48.626136+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 8347648 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:49.626282+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:50.626558+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:51.626803+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:52.627001+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:53.627192+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:54.627445+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:55.627717+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:56.627867+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:57.628066+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 8339456 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:58.628234+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:59.628359+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:00.628479+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:01.628629+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:02.628824+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:03.628968+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:04.629147+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:05.629330+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:06.629507+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:07.629718+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:08.629918+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:09.630113+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:10.630284+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:11.630456+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:12.630689+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:13.630865+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:14.631070+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:15.631225+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:16.631369+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:17.631532+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:18.631735+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:19.632009+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:20.632233+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:21.632407+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:22.632579+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:23.632758+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:24.632967+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:25.633118+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:26.633300+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:27.633576+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:28.633903+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:29.634076+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:30.634275+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:31.634469+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:32.634789+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:33.634951+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:34.635109+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:35.635259+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:36.635398+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:37.635545+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:38.635706+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:39.635846+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:40.635998+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:41.636143+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:42.636306+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:43.636519+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:44.636756+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:45.636872+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:46.637016+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 8323072 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:47.637170+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:48.637308+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:49.637447+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:50.637577+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:51.637727+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:52.637861+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:53.637977+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:54.638155+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:55.638288+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:56.638494+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:57.638660+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:58.638859+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:59.639029+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 6273 writes, 25K keys, 6273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6273 writes, 1120 syncs, 5.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 485 writes, 1161 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.55 MB, 0.00 MB/s
                                           Interval WAL: 485 writes, 208 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:00.639167+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:01.639286+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:02.639403+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:03.639601+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:04.639807+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:05.640057+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:06.640300+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:07.640567+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 8314880 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: mgrc ms_handle_reset ms_handle_reset con 0x55c32b3e2000
Jan 31 08:48:57 compute-0 ceph-osd[88061]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3272136490
Jan 31 08:48:57 compute-0 ceph-osd[88061]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3272136490,v1:192.168.122.100:6801/3272136490]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: get_auth_request con 0x55c32df31c00 auth_method 0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:08.640837+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:09.641021+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:10.641237+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:11.641472+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:12.641681+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:13.641880+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:14.642228+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:15.642689+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:16.642842+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 7987200 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:17.643041+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:18.643289+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:19.643693+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:20.644068+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:21.644398+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005173 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdcf38/0x1ba000, compress 0x0/0x0/0x0, omap 0x1551d, meta 0x2bbaae3), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:22.644744+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:23.645000+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:24.645216+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:25.645376+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32b644800
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 430.830963135s of 432.055694580s, submitted: 51
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 7979008 heap: 84803584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:26.645499+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 24723456 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 146 handle_osd_map epochs [146,147], i have 147, src has [1,147]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 147 ms_handle_reset con 0x55c32b644800 session 0x55c32db90c40
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074752 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:27.645627+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32d6ad400
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 23887872 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fc1fd000/0x0/0x4ffc00000, data 0xd4ead4/0xe2d000, compress 0x0/0x0/0x0, omap 0x15b89, meta 0x2bba477), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:28.645876+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 23740416 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 147 handle_osd_map epochs [147,148], i have 148, src has [1,148]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 ms_handle_reset con 0x55c32d6ad400 session 0x55c32e001880
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:29.646037+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:30.646169+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:31.646326+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124138 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:32.646582+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:33.646782+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb9f8000/0x0/0x4ffc00000, data 0x15506a3/0x1632000, compress 0x0/0x0/0x0, omap 0x161f5, meta 0x2bb9e0b), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:34.647054+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb9f8000/0x0/0x4ffc00000, data 0x15506a3/0x1632000, compress 0x0/0x0/0x0, omap 0x161f5, meta 0x2bb9e0b), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:35.647256+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:36.647501+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124138 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:37.647727+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb9f8000/0x0/0x4ffc00000, data 0x15506a3/0x1632000, compress 0x0/0x0/0x0, omap 0x161f5, meta 0x2bb9e0b), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:38.647879+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb9f8000/0x0/0x4ffc00000, data 0x15506a3/0x1632000, compress 0x0/0x0/0x0, omap 0x161f5, meta 0x2bb9e0b), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:39.648109+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 21643264 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32b644400
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.793551445s of 14.645915985s, submitted: 40
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:40.648238+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 21487616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:41.648418+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fb9fa000/0x0/0x4ffc00000, data 0x15506a3/0x1632000, compress 0x0/0x0/0x0, omap 0x16475, meta 0x2bb9b8b), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 148 handle_osd_map epochs [148,149], i have 149, src has [1,149]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 21487616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126624 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:42.648639+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 21471232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:43.648852+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 21659648 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb9f7000/0x0/0x4ffc00000, data 0x1552293/0x1635000, compress 0x0/0x0/0x0, omap 0x16812, meta 0x2bb97ee), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:44.649086+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 21659648 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 149 ms_handle_reset con 0x55c32b644400 session 0x55c32e3648c0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:45.649275+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 21659648 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:46.649434+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 21659648 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062556 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:47.649611+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32dd5c800
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 21512192 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:48.649764+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0x8e2293/0x9c5000, compress 0x0/0x0/0x0, omap 0x16b66, meta 0x2bb949a), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 20414464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:49.649944+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 20357120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 150 ms_handle_reset con 0x55c32dd5c800 session 0x55c32db93340
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:50.650111+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:51.650270+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce65000/0x0/0x4ffc00000, data 0xe3e88/0x1c6000, compress 0x0/0x0/0x0, omap 0x170c3, meta 0x2bb8f3d), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023433 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:52.650412+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:53.650545+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:54.650685+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce65000/0x0/0x4ffc00000, data 0xe3e88/0x1c6000, compress 0x0/0x0/0x0, omap 0x170c3, meta 0x2bb8f3d), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:55.650812+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce65000/0x0/0x4ffc00000, data 0xe3e88/0x1c6000, compress 0x0/0x0/0x0, omap 0x170c3, meta 0x2bb8f3d), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:56.650943+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023433 data_alloc: 218103808 data_used: 12897
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:57.651224+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:58.651490+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fce65000/0x0/0x4ffc00000, data 0xe3e88/0x1c6000, compress 0x0/0x0/0x0, omap 0x170c3, meta 0x2bb8f3d), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.727752686s of 18.762632370s, submitted: 104
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:59.651926+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:00.652278+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:01.652447+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:02.652713+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:03.652857+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:04.653167+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:05.653385+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:06.653611+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:07.653755+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:08.653916+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:09.654063+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:10.654300+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:11.654471+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:12.654635+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:13.654900+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:14.660418+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:15.660606+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:16.660716+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:17.660884+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:18.661032+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:19.661219+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:20.661456+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:21.661720+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:22.661985+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:23.662163+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:24.662396+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:25.662543+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:26.662722+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:27.662928+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:28.663148+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:29.663316+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:30.663497+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:31.663610+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:32.663798+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:33.664024+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:34.664204+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:35.664343+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:36.664617+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:37.664799+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026863 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:38.665013+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:39.665184+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:40.665377+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:41.665510+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 20447232 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.865966797s of 43.191699982s, submitted: 13
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:42.665708+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 20439040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:43.665828+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:44.666062+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 20381696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:45.666192+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:46.666324+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:47.666491+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:48.666603+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:49.666772+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:50.666893+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:51.667067+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:52.667287+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:53.667528+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:54.667800+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:55.667963+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:56.668219+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:57.668450+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:58.668703+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:59.668912+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:00.669050+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:01.669247+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:02.669421+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:03.669698+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 20389888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:04.669910+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.954086304s of 22.843032837s, submitted: 90
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 20520960 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:05.670120+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:06.670261+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 19406848 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:07.670412+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 19406848 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:08.670604+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 19406848 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:09.670854+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 19406848 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:10.671106+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 19406848 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:11.671325+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:12.671600+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:13.671874+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:14.672138+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:15.672298+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:16.672527+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:17.672841+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:18.673106+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:19.673297+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:20.673422+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:21.673584+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:22.673793+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:23.673933+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:24.674207+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:25.674345+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:26.674483+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:27.674719+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:28.674933+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:29.675098+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:30.679530+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:31.679707+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:32.679866+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:33.680009+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:34.680304+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:35.680475+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:36.680642+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:37.680868+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:38.681011+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:39.682824+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:40.682972+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:41.683124+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:42.683274+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:43.683414+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:44.683578+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:45.683725+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:46.683892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:47.684088+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:48.684292+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:49.684488+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:50.684755+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:51.684920+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:52.685101+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:53.685305+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:54.685544+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:55.685727+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:56.685948+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:57.686214+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:58.686388+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:59.686604+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:00.686814+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:01.686998+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:02.687219+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:03.687457+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:04.687909+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:05.688068+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:06.688290+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:07.688507+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:08.688718+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:09.688866+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:10.689064+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:11.689249+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:12.689442+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:13.689632+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:14.689892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:15.689990+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:16.690224+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:17.690563+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:18.690779+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:19.691008+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:20.691250+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:21.691415+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:22.691611+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:23.691790+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:24.692075+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:25.692284+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:26.692606+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:27.692857+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:28.693048+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:29.693284+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:30.693600+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:31.693813+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:32.694077+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:33.694248+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:34.694418+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:35.694574+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:36.694742+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:37.694925+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:38.695135+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:39.695301+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:40.695471+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:41.695712+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:42.695889+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:43.696042+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:44.696231+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:45.696359+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:46.696499+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:47.696616+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:48.696764+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:49.697115+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:50.697296+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:51.697496+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:52.697628+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:53.697770+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:54.697934+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:55.698133+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:56.698287+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:57.698419+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:58.698575+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:59.698715+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:00.698882+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:01.699047+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:02.699200+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:03.699354+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:04.699541+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:05.699716+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:06.699890+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:07.700111+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:08.700302+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:09.700467+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:10.700708+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:11.700873+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:12.701006+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:13.701155+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:14.701382+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:15.701567+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:16.701836+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:17.702030+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:18.702195+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:19.702335+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:20.702457+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:21.702609+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:22.702764+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:23.702944+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:24.703133+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:25.703257+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:26.703438+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:27.703589+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:28.703763+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:29.703904+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:30.704077+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:31.704222+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:32.704431+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:33.704597+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:34.704724+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:35.704907+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:36.705053+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:37.705237+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:38.705368+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:39.705488+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:40.705692+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:41.705849+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:42.705984+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:43.706130+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:44.706282+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:45.706454+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:46.706591+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:47.706737+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:48.706912+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:49.707046+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:50.707197+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:51.707334+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:52.707440+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:53.707590+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:54.707795+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:55.707913+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:56.708053+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 19382272 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:57.708189+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:58.708316+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:59.708758+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:00.708898+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:01.709117+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:02.709308+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:03.709510+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:04.709728+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:05.709961+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:06.710151+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:07.710356+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:08.710520+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:09.710721+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:10.710869+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:11.711011+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:12.711193+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:13.711624+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:14.711979+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:15.712117+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:16.712277+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:17.712416+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:18.712556+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:19.712743+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:20.712966+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:21.713103+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:22.713338+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:23.713495+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:24.713729+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:25.713864+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:26.714055+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:27.714260+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:28.714407+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:29.714560+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:30.714790+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:31.714969+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:32.715073+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:33.715199+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:34.715394+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:35.715565+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:36.716477+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:37.716797+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:38.717393+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:39.717679+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:40.718103+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:41.719443+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:42.720750+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:43.720960+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:44.721498+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:45.721840+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:46.722030+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:47.722210+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:48.722371+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:49.722556+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:50.722714+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:51.722987+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:52.723305+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:53.723498+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:54.723772+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:55.725720+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:56.725961+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:57.726226+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:58.726411+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:59.726745+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:00.726936+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:01.727203+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:02.730147+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:03.730360+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:04.730556+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:05.730861+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:06.731060+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:07.731223+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:08.731378+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:09.731547+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:10.731704+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:11.731926+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:12.732072+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:13.732208+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:14.732423+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:15.732631+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:16.732863+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:17.733065+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:18.733308+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:19.733477+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:20.733681+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:21.733863+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:22.734004+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:23.734149+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:24.734304+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:25.734437+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:26.734776+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:27.735595+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:28.735795+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:29.735932+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:30.736144+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:31.736293+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:32.736442+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:33.736720+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:34.736933+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:35.737061+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:36.737214+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:37.856684+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:38.856826+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:39.857003+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:40.857191+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:41.857362+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:42.857501+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:43.857626+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:44.857838+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:45.858125+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:46.858330+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:47.858474+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:48.858603+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:49.858729+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:50.858864+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:51.858991+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 19349504 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:52.859130+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 19349504 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:53.859289+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 19349504 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:54.859526+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 19349504 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:55.859700+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 19349504 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:56.859887+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 19349504 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:57.860079+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 19349504 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:58.860287+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:59.860454+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:00.860589+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:01.860750+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:02.860879+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:03.861042+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:04.861267+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:05.861422+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:06.861566+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:07.861711+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:08.861923+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:09.862102+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:10.862236+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:11.862367+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:12.862496+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:13.862635+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:14.862862+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:15.863018+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:16.863158+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:17.865282+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:18.865491+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:19.865627+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:20.865820+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:21.866026+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:22.866175+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:23.866387+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:24.866557+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:25.866709+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:26.866881+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:27.867023+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:28.867227+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:29.867387+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:30.867572+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:31.867764+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:32.867919+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:33.868133+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:34.868340+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:35.868562+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 19333120 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:36.868732+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:37.869077+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:38.869235+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:39.869391+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:40.869591+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:41.869738+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:42.869869+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:43.870050+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:44.870470+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:45.870669+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:46.870924+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:47.871092+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:48.871297+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:49.871546+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:50.871705+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:51.871890+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:52.872075+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:53.872253+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:54.872502+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:55.872720+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:56.872886+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:57.873114+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:58.873286+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:59.873534+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:00.873733+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:01.873908+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:02.874086+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:03.874317+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:04.874495+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:05.874718+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:06.874941+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:07.875115+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:08.875308+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:09.875465+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:10.875619+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:11.875761+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:12.875895+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 19316736 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:13.876042+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:14.876225+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:15.876418+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:16.876586+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:17.876716+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:18.876917+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:19.877138+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:20.877279+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:21.877488+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 19300352 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:22.877740+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:23.877960+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:24.878200+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:25.878386+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:26.879992+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:27.880235+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:28.880383+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:29.880522+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:30.880721+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:31.880890+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 19447808 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:32.881015+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:33.881156+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:34.881307+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:35.881455+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:36.881620+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:37.881840+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:38.882005+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:39.882219+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:40.882435+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:41.882601+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:42.882709+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:43.882910+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:44.883113+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:45.883300+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:46.883590+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:47.883807+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 19439616 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:48.883945+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:49.884156+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:50.884345+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:51.884516+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:52.884744+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:53.884941+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:54.885142+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:55.885362+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:56.885510+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:57.885729+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:58.885935+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:59.886095+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:00.886321+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:01.886490+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:02.886754+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:03.887018+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:04.887191+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:05.887362+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:06.887596+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:07.887756+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:08.887937+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:09.888075+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:10.888206+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:11.888355+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:12.888491+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:13.888736+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:14.888888+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:15.889022+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:16.889170+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:17.889373+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:18.889496+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:19.889626+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:20.889795+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:21.889980+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:22.890165+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:23.890315+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:24.890494+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:25.890642+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:26.890860+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:27.891026+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:28.891188+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:29.891320+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:30.891471+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:31.894737+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:32.894864+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 19431424 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:33.895011+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:34.895191+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:35.895366+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:36.895490+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:37.895630+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:38.895812+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:39.895986+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:40.896158+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:41.896358+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:42.896470+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:43.896638+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:44.896886+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:45.897055+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:46.897262+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:47.897511+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:48.897744+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:49.897917+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:50.898126+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:51.898346+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:52.898622+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 19415040 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:53.898908+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:54.899101+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:55.899351+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:56.899539+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:57.899685+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:58.899885+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:59.900040+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 6771 writes, 26K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6771 writes, 1347 syncs, 5.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 498 writes, 1214 keys, 498 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 498 writes, 227 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:00.900199+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:01.900373+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:02.900544+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:03.900747+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:04.900953+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:05.901148+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:06.901351+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:07.901597+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:08.901745+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:09.901908+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:10.902050+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 19398656 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:11.902242+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:12.902416+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 19390464 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:13.902627+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:14.902875+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:15.903038+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 19374080 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:16.903283+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:17.903474+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:18.903684+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:19.903899+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:20.904030+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:21.904235+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:22.904469+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:23.904645+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:24.904906+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:25.904998+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:26.905141+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 19365888 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:27.907351+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:28.907484+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:29.907611+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:30.907708+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:31.907857+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:32.908011+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 19357696 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:33.908168+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:34.908334+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:35.908434+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:36.908547+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:37.908715+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:38.908829+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:39.908994+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:40.909130+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:41.909298+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:42.912675+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:43.912875+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:44.913361+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:45.913538+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:46.913723+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:47.913851+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:48.913972+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:49.914153+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:50.914397+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:51.914590+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 19341312 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:52.914755+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:53.914927+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:54.915110+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:55.915272+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:56.916039+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:57.916454+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:58.917175+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:59.917798+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:00.918223+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:01.918555+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:02.918901+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:03.919126+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:04.919490+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:05.919954+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:06.920192+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:07.920322+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:08.920457+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:09.921077+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:10.921256+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:11.921520+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 19324928 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:12.921784+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:13.921943+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:14.922193+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:15.922379+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:16.922682+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:17.922935+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:18.923196+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:19.923393+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:20.923584+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:21.923711+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 19308544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:22.923930+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 19300352 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:23.924212+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 19300352 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:24.924522+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 19300352 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:25.924743+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 19292160 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:26.924975+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 19292160 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:27.925191+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 19292160 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:28.925416+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 19292160 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:29.925559+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 19292160 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:30.925736+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 19292160 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:31.925929+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 19292160 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:32.926079+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:33.926262+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:34.926461+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:35.926735+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:36.926902+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:37.927081+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:38.927252+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:39.927438+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:40.927580+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:41.927719+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 576.374816895s of 577.331726074s, submitted: 24
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:42.927868+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 19275776 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:43.928128+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 19251200 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:44.928318+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 19251200 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:45.928470+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 19251200 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:46.928639+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 19243008 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:47.928825+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 19243008 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:48.929042+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 19234816 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:49.929197+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026215 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 19210240 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:50.929335+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 19202048 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:51.929476+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 19193856 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:52.929642+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.150415897s of 10.604299545s, submitted: 82
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:53.929940+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:54.930172+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:55.930367+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:56.930530+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:57.930713+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:58.930852+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:59.930985+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:00.931155+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:01.931298+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:02.931535+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:03.931724+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:04.931970+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:05.932127+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:06.932347+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:07.932617+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:08.932859+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:09.933014+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:10.933231+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:11.933455+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:12.933722+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:13.937137+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:14.937461+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:15.937764+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:16.938000+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:17.938201+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:18.938359+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:19.938549+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:20.938757+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:21.938955+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:22.939145+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:23.939308+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:24.939487+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:25.939690+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:26.939833+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:27.940002+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:28.940194+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:29.940391+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:30.940559+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:31.940752+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:32.940927+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:33.941061+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:34.941200+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:35.941342+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:36.941469+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:37.941616+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:38.941751+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:39.941897+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:40.942022+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:41.942154+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:42.942322+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:43.942524+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:44.942762+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:45.942899+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:46.943137+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:47.943287+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:48.943418+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:49.943555+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:50.943692+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:51.943815+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:52.943986+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:53.944164+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:54.944370+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:55.944517+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:56.944706+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:57.945014+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:58.945156+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:59.945278+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:00.945408+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:01.945530+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:02.945674+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:03.945824+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:04.946003+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:05.946149+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:06.946293+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:07.946488+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:08.946693+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:09.946892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:10.947078+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:11.947227+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:12.947364+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:13.947457+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:14.947618+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:15.947769+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:16.947884+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:17.948057+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:18.948230+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:19.948390+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:20.948516+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:21.948706+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:22.948841+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:23.948988+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:24.949237+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:25.949420+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:26.949568+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:27.949712+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:28.949895+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:29.950046+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:30.950204+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:31.950405+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:32.950561+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 19161088 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:33.950769+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:34.950987+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:35.951133+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:36.951273+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:37.951416+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:38.951622+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:39.951846+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:40.952075+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:41.952393+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:42.952700+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:43.952892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:44.953132+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:45.953298+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:46.953442+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:47.953597+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:48.953777+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:49.953976+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:50.954137+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:51.954329+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:52.954502+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:53.954677+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 19152896 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:54.954855+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:55.955047+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:56.955252+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:57.955487+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:58.955644+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:59.955882+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:00.956081+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:01.956288+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:02.956514+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:03.956740+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:04.956996+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:05.957186+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:06.957326+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:07.957464+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:08.957637+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:09.957886+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:10.958181+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:11.958369+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:12.958558+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:13.958711+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:14.959242+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:15.960744+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:16.960922+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:17.961122+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:18.961257+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:19.961414+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:20.961527+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:21.961686+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:22.961880+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:23.962070+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:24.962296+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:25.962505+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:26.962697+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:27.962871+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:28.963066+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:29.963235+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:30.963400+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:31.963597+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:32.963777+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:33.963956+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:34.964182+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:35.964467+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:36.964682+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:37.964947+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:38.965144+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:39.965327+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:40.965482+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:41.965697+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:42.966471+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:43.966735+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:44.966951+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:45.967182+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:46.967405+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:47.967586+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:48.967867+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:49.968077+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:50.968329+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:51.968581+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:52.970189+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:53.970451+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:54.970756+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:55.970976+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:56.971206+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:57.971478+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:58.971707+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:59.971882+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 19144704 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets getting new tickets!
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:00.972178+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _finish_auth 0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:00.972996+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:01.972380+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:02.972597+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:03.972810+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:04.973006+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:05.973207+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:06.973392+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:07.973588+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:08.973774+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:09.973991+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:10.974183+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:11.974355+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:12.974557+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:13.974750+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:14.975078+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:15.975284+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:16.975440+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:17.975606+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:18.975788+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:19.975937+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:20.976094+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:21.976282+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:22.976449+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:23.976580+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:24.976759+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:25.976919+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:26.977088+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:27.977290+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:28.977455+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:29.977618+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:30.977813+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:31.977980+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:32.978238+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:33.978387+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:34.978559+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:35.979414+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:36.979782+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:37.980174+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:38.980336+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:39.980482+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:40.980683+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:41.980880+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:42.981030+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:43.981195+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:44.981408+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:45.981564+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:46.981789+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:47.982035+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:48.982244+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:49.982491+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:50.982718+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:51.982900+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:52.983082+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:53.983282+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:54.983498+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:55.983683+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:56.983865+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:57.984000+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:58.984153+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:59.984296+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:00.984500+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:01.984729+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:02.984969+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:03.985187+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:04.985455+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:05.985715+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:06.986014+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:07.987051+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:08.987300+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:09.987485+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:10.987865+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:11.988150+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:12.988370+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:13.988528+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:14.988748+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:15.988951+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:16.989164+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:17.989329+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:18.989548+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:19.989770+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:20.989994+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:21.990194+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:22.990415+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:23.990632+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:24.991232+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:25.991467+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:26.991775+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.992069+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.992350+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.992593+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.992837+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.993071+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 19136512 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.993270+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.993488+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.994118+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.994337+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.994528+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.994749+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.404988+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.405259+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.405414+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.405604+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.405749+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.405919+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.406094+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.406294+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.406482+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.406776+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.406954+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.407182+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.407341+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.407546+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 19120128 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.407713+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.407860+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.408051+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.408182+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.408320+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.408449+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.408616+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.408820+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.409165+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.409304+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.409477+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.409602+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.409759+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.410013+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026143 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 19103744 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 313.548492432s of 314.189605713s, submitted: 8
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.410162+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 19079168 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.410320+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 19054592 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.410468+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe5907/0x1c9000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x2bb8c59), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 19030016 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.410616+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32b644400
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 19013632 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.410746+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029662 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 90988544 unmapped: 10600448 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.410891+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 18989056 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.411047+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 151 handle_osd_map epochs [151,152], i have 152, src has [1,152]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 18972672 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.411220+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 ms_handle_reset con 0x55c32b644400 session 0x55c32d6821c0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 18964480 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.411412+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 18964480 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.411545+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 18964480 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.411727+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 18964480 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.411919+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 18964480 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.412082+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.412250+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.412432+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.412580+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.412731+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.412893+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.413088+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.413318+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.413509+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.413681+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.413832+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.413974+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.414142+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.414374+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 18956288 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.414539+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.414729+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.414977+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.415169+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.415380+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.415561+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.415723+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.415865+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.416068+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18939904 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.416244+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.416507+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.417008+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.417282+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.417468+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.417733+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.417887+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.418078+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.418231+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.418397+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.418584+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 18931712 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.421738+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.421966+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.422252+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.422435+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.422630+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.422869+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.423021+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.423220+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.423372+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.423564+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.423710+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.423848+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.424006+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.424144+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.424295+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.424446+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.424627+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.424830+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.425021+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.425215+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 18915328 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.425388+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.425525+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.425745+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.425936+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.426127+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.426321+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.426525+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.426692+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.426908+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.427162+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.427357+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.427554+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.427784+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.427981+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.428307+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.428598+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.428925+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.429123+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.429258+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.429572+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 18898944 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.429831+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.430165+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.430727+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.430892+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.431142+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.431280+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.431685+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.431920+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17990, meta 0x2bb8670), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.432118+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073664 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: handle_auth_request added challenge on 0x55c32dd5dc00
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 92.393150330s of 94.695350647s, submitted: 36
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.432875+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 152 handle_osd_map epochs [152,153], i have 153, src has [1,153]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fc65f000/0x0/0x4ffc00000, data 0x8e74b3/0x9cd000, compress 0x0/0x0/0x0, omap 0x17b68, meta 0x2bb8498), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.433137+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 18882560 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.433395+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 153 ms_handle_reset con 0x55c32dd5dc00 session 0x55c32e324700
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.433633+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe9093/0x1cf000, compress 0x0/0x0/0x0, omap 0x18061, meta 0x2bb7f9f), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:46.433877+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035681 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.434076+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.434217+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.434390+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.434586+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.434716+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe9093/0x1cf000, compress 0x0/0x0/0x0, omap 0x18061, meta 0x2bb7f9f), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035681 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.434839+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 18866176 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fce5b000/0x0/0x4ffc00000, data 0xe9093/0x1cf000, compress 0x0/0x0/0x0, omap 0x18061, meta 0x2bb7f9f), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.615499496s of 10.817469597s, submitted: 39
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.435007+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.435161+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.435409+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.435564+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _renew_subs
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.435698+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.435885+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.436066+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.436200+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.436366+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.436545+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.436771+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.436946+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.437123+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.437246+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.437379+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.437507+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.437727+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.437927+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.438119+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.438281+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 18849792 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.438462+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.438725+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.438985+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.439141+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.439278+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.439440+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.439613+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.439795+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.439988+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.440133+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.440266+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.440493+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.441051+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.441343+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.441733+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.441914+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.442080+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.442326+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.442512+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.442745+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 18833408 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.442976+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.443139+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.443316+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.443489+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.443733+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.443884+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.444032+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.444230+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.444352+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.444556+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.444814+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.445013+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.445225+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.445443+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.445643+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.445833+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.445979+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.446127+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.446321+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.446539+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 18817024 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.446705+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.446886+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.447118+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.447295+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.447500+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.447712+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.447894+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 7109 writes, 27K keys, 7109 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7109 writes, 1507 syncs, 4.72 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 338 writes, 698 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 338 writes, 160 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.448076+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.448287+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.448461+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.448617+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.448914+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.449107+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.449251+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.449397+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.449553+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.449795+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.450013+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.450151+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.450277+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 18800640 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.450434+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.450608+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.451153+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.451899+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.452070+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.452270+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.452426+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.452555+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.452713+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0xeab12/0x1d2000, compress 0x0/0x0/0x0, omap 0x1834b, meta 0x2bb7cb5), peers [0,1] op hist [])
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:48:57 compute-0 ceph-osd[88061]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:48:57 compute-0 ceph-osd[88061]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038455 data_alloc: 218103808 data_used: 16958
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.452873+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 18784256 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.453004+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 18694144 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.453122+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'config diff' '{prefix=config diff}'
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'config show' '{prefix=config show}'
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 18087936 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.453286+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 18284544 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: tick
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_tickets
Jan 31 08:48:57 compute-0 ceph-osd[88061]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:26.453412+0000)
Jan 31 08:48:57 compute-0 ceph-osd[88061]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 18006016 heap: 101588992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:48:57 compute-0 ceph-osd[88061]: do_command 'log dump' '{prefix=log dump}'
Jan 31 08:48:57 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:48:57 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14708 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 31 08:48:57 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785237733' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 08:48:57 compute-0 ceph-mon[75294]: from='client.14700 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:57 compute-0 ceph-mon[75294]: pgmap v1707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:57 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1594231672' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 08:48:57 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1600507373' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 08:48:57 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1383153784' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:48:57 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:58 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14710 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 08:48:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827646805' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:48:58 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:58 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14714 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:58 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} v 0)
Jan 31 08:48:58 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 08:48:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208275483' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: from='client.14708 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1785237733' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: from='client.14710 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3827646805' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:48:59 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14718 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} v 0)
Jan 31 08:48:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:48:59 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 08:48:59 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/599691999' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:49:00 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14722 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:00 compute-0 ceph-mon[75294]: pgmap v1708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:00 compute-0 ceph-mon[75294]: from='client.14714 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:00 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/208275483' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:49:00 compute-0 ceph-mon[75294]: from='client.14718 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:00 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:49:00 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/599691999' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:49:00 compute-0 ceph-mon[75294]: from='client.14722 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 08:49:00 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343924093' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:49:00 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:00 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14726 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:00 compute-0 crontab[266940]: (root) LIST (root)
Jan 31 08:49:00 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 08:49:00 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958255045' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:49:01 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14730 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:01 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/343924093' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:49:01 compute-0 ceph-mon[75294]: pgmap v1709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:01 compute-0 ceph-mon[75294]: from='client.14726 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:01 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/958255045' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:49:01 compute-0 ceph-mon[75294]: from='client.14730 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:01 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 31 08:49:01 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894993142' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 08:49:01 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14734 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:07.045437+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:08.045581+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:09.045775+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:10.045951+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:11.046069+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:12.046216+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:13.046327+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:14.046483+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:15.046597+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:16.046734+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:17.046860+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:18.047020+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:19.047182+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:20.047339+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:21.047523+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:22.047703+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:23.047842+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:24.048033+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:25.048258+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:26.048426+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:27.048567+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:28.048720+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:29.048857+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:30.049016+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:31.049152+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:32.049276+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:33.049364+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:34.049463+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:35.049586+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:36.049724+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:37.049862+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:38.050014+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:39.050174+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:40.050380+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:41.050528+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:42.050714+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:43.050844+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:44.051012+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:45.051168+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:46.051324+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:47.051546+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:48.051715+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:49.051850+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:50.052017+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:51.052135+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:52.052274+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:53.052408+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:54.052723+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:55.053000+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:56.053197+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:57.053381+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:58.053552+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:59.053743+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:00.053970+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:01.054136+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:02.054295+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:03.054636+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:04.054791+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:05.054967+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:06.055133+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:07.055285+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:08.055425+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:09.055566+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:10.055890+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:11.056149+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:12.056336+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:13.056503+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:14.056732+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:15.056931+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:16.057169+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:17.057353+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:18.057573+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:19.057777+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:20.057963+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:21.058133+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:22.058312+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:23.058444+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:24.058555+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:25.058706+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:26.058905+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:27.059149+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread fragmentation_score=0.000119 took=0.000016s
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:28.059296+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:29.059481+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:30.060301+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:31.060508+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:32.060663+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:33.060848+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:34.061071+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:35.061276+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:36.061417+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:37.061570+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:38.061751+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:39.061957+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:40.062120+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:41.062310+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:42.062500+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:43.062697+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:44.062862+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:45.063010+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:46.063192+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:47.063409+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:48.063535+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:49.063719+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:50.064010+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:51.064209+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:52.064402+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:53.064637+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:54.065002+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:55.065274+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:56.065431+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:57.065619+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:58.065799+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:59.066016+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:00.066336+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:01.066540+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:02.066776+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:03.066984+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:04.067127+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:05.067272+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:06.067455+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:07.067739+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:08.067986+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:09.068140+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:10.068384+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:11.068617+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:12.068789+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:13.068946+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:14.069168+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:15.069439+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:16.069705+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:17.069961+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:18.070180+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:19.070378+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:20.070539+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:21.070730+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:22.070976+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:23.071182+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:24.071462+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:25.071686+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:26.071902+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:27.072109+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:28.072289+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:29.072524+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:30.072740+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:31.072968+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:32.073135+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:33.073297+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:34.073451+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:35.073643+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:36.073840+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:37.074042+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:38.074173+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:39.074317+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:40.074503+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:41.074715+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:42.074944+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 1949696 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:43.075172+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1941504 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:44.075328+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1941504 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:45.075464+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1941504 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:46.075598+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 1941504 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:47.075707+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:48.076256+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:49.076478+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:50.076686+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:51.076877+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:52.077056+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:53.077224+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:54.077464+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:55.077702+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 1933312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7214 writes, 29K keys, 7214 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7214 writes, 1459 syncs, 4.94 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a181a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556c5a1818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:56.077963+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:57.078182+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:58.078431+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:59.078606+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:00.078769+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:01.079031+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 1900544 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:02.079192+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:03.079614+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:04.079719+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:05.080136+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:06.081730+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:07.082008+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:08.082561+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:09.083252+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:10.084083+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:11.084229+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 1892352 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:12.084719+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:13.084855+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:14.085015+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:15.085173+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:16.085366+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:17.085557+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:18.085783+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:19.086082+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:20.086737+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:21.086945+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 1884160 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:22.087121+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:23.087264+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:24.087438+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:25.087604+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:26.087816+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:27.087965+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:28.088190+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:29.088344+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:30.088779+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:31.089099+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:32.089375+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:33.089566+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:34.089718+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:35.090062+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:36.090222+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:37.090635+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:38.091955+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:39.093101+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:40.093820+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:41.094378+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:42.094722+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:43.095065+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:44.095575+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:45.096106+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:46.096576+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:47.096976+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 1875968 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:48.097316+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:49.097745+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:50.097985+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:51.098124+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:52.098268+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:53.098434+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:54.098633+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:55.098787+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:56.099735+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:57.099977+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:58.100216+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:59.100471+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:00.100746+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:01.100894+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:02.101098+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:03.101304+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:04.101505+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:05.101739+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:06.101954+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:07.102139+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:08.102351+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:09.102531+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:10.103107+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:11.103628+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 1867776 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:12.103918+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:13.104135+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:14.104267+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:15.104547+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:16.104674+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:17.104805+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:18.105062+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:19.105257+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:20.105497+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:21.105684+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:22.105889+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:23.106034+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:24.106162+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:25.106300+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:26.106493+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:27.106742+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:28.106893+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:29.107019+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:30.107260+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:31.107412+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:32.107636+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:33.107857+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:34.108032+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:35.108229+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:36.108379+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:37.108596+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:38.108738+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:39.108942+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:40.109187+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:41.109389+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:42.109581+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 277.894012451s of 280.371124268s, submitted: 14
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:43.109736+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:44.109935+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 1859584 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:45.110122+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:46.110325+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 1851392 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:47.110498+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [1])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:48.110728+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:49.110927+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 1843200 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:50.111138+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 794624 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:51.111276+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 794624 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:52.111452+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 794624 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.035027027s of 10.048583031s, submitted: 47
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:53.111593+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 794624 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:54.111745+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:55.111875+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043543 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:56.112022+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:57.112249+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:58.112432+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:59.112559+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:00.112706+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:01.112835+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:02.112965+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:03.113095+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:04.113216+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:05.113337+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:06.113485+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:07.113665+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:08.113868+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:09.114018+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:10.114174+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:11.114449+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:12.114583+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:13.114722+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:14.115408+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:15.116256+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:16.116833+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:17.117251+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:18.117408+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:19.117900+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:20.118380+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:21.118630+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:22.118873+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:23.119050+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:24.119214+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:25.119415+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:26.119581+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:27.119834+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:28.120058+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:29.120225+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:30.120468+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:31.120596+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:32.120743+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:33.120899+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:34.121067+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:35.121241+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:36.121452+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:37.121610+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:38.121731+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:39.121867+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:40.122030+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:41.122178+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043471 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:42.122318+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fce19000/0x0/0x4ffc00000, data 0x13db63/0x213000, compress 0x0/0x0/0x0, omap 0x155bb, meta 0x2bbaa45), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:43.122456+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 909312 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f101400
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:44.122724+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.081836700s of 51.762359619s, submitted: 43
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 729088 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:45.122858+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 729088 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:46.123011+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053145 data_alloc: 218103808 data_used: 14104
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 17383424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:47.123323+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:48.123473+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fbe0f000/0x0/0x4ffc00000, data 0x1141312/0x121b000, compress 0x0/0x0/0x0, omap 0x15aa6, meta 0x2bba55a), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 140 ms_handle_reset con 0x556c5f101400 session 0x556c5dfb2e00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:49.123676+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 17301504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f102800
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:50.123885+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 17137664 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:51.124022+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181421 data_alloc: 218103808 data_used: 14123
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 17113088 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:52.124190+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fb60d000/0x0/0x4ffc00000, data 0x1942eda/0x1a1f000, compress 0x0/0x0/0x0, omap 0x15aa6, meta 0x2bba55a), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86654976 unmapped: 16105472 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:53.124365+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86654976 unmapped: 16105472 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb608000/0x0/0x4ffc00000, data 0x1944959/0x1a22000, compress 0x0/0x0/0x0, omap 0x15b9c, meta 0x2bba464), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:54.124509+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.905598640s of 10.236621857s, submitted: 46
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86654976 unmapped: 16105472 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 142 ms_handle_reset con 0x556c5f102800 session 0x556c5bb37500
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:55.124695+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:56.124838+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189557 data_alloc: 218103808 data_used: 14708
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb604000/0x0/0x4ffc00000, data 0x1946518/0x1a26000, compress 0x0/0x0/0x0, omap 0x15c35, meta 0x2bba3cb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:57.125069+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb604000/0x0/0x4ffc00000, data 0x1946518/0x1a26000, compress 0x0/0x0/0x0, omap 0x15c35, meta 0x2bba3cb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:58.125203+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb604000/0x0/0x4ffc00000, data 0x1946518/0x1a26000, compress 0x0/0x0/0x0, omap 0x15c35, meta 0x2bba3cb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:59.125411+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:00.125592+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:01.125715+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189557 data_alloc: 218103808 data_used: 14708
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:02.125948+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 16097280 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:03.126091+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f112400
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb604000/0x0/0x4ffc00000, data 0x1946518/0x1a26000, compress 0x0/0x0/0x0, omap 0x15c35, meta 0x2bba3cb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 15933440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:04.126298+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 15933440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:05.126492+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.674785614s of 10.866004944s, submitted: 4
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 15933440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:06.126636+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191595 data_alloc: 218103808 data_used: 14724
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:07.126823+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb603000/0x0/0x4ffc00000, data 0x1948108/0x1a29000, compress 0x0/0x0/0x0, omap 0x15c35, meta 0x2bba3cb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:08.126966+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:09.127291+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 143 ms_handle_reset con 0x556c5f112400 session 0x556c5e1e56c0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:10.127472+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:11.127628+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110318 data_alloc: 218103808 data_used: 14708
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5e0ffc00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:12.127799+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 15728640 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc604000/0x0/0x4ffc00000, data 0x9480f8/0xa28000, compress 0x0/0x0/0x0, omap 0x15c35, meta 0x2bba3cb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:13.127978+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 15720448 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:14.128148+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 15720448 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:15.128300+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87064576 unmapped: 15695872 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 145 ms_handle_reset con 0x556c5e0ffc00 session 0x556c5e8d1880
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fc5fc000/0x0/0x4ffc00000, data 0x94b734/0xa2c000, compress 0x0/0x0/0x0, omap 0x15d46, meta 0x2bba2ba), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:16.128449+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074371 data_alloc: 218103808 data_used: 14708
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:17.128683+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:18.128977+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:19.129148+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fce00000/0x0/0x4ffc00000, data 0x14b734/0x22c000, compress 0x0/0x0/0x0, omap 0x15d46, meta 0x2bba2ba), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:20.129378+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fce00000/0x0/0x4ffc00000, data 0x14b734/0x22c000, compress 0x0/0x0/0x0, omap 0x15d46, meta 0x2bba2ba), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:21.129573+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074499 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:22.129801+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 145 handle_osd_map epochs [145,146], i have 146, src has [1,146]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.516980171s of 17.126939774s, submitted: 50
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:23.129960+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fce00000/0x0/0x4ffc00000, data 0x14b734/0x22c000, compress 0x0/0x0/0x0, omap 0x15d46, meta 0x2bba2ba), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:24.130128+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:25.130364+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:26.130490+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:27.130787+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:28.130913+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:29.131110+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:30.131343+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:31.131486+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:32.131642+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:33.131817+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:34.131975+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:35.132184+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:36.132454+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:37.132703+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:38.132856+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:39.133079+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:40.133287+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:41.133533+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 15826944 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:42.133800+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:43.134024+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:44.134238+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:45.134409+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:46.134560+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:47.134821+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:48.135038+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:49.135201+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 15958016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:50.135417+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 15949824 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:51.135578+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 15949824 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:52.135753+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 15949824 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:53.135937+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 15949824 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:54.136121+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 15949824 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:55.136327+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 15949824 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:56.136491+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 15949824 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:57.136621+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:58.136821+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:59.136966+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:00.137151+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:01.137361+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:02.137554+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:03.137706+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:04.137845+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:05.137974+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:06.138095+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:07.138231+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:08.138412+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:09.138625+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:10.138883+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:11.139049+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:12.139247+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:13.139426+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:14.139623+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:15.139801+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:16.139982+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 15941632 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:17.140136+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 15933440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:18.140388+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 15933440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:19.140576+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 15933440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:20.140857+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 15933440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:21.141066+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 15925248 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:22.141314+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 15925248 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:23.141500+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 15925248 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:24.141736+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 15925248 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:25.141928+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:26.142095+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:27.142274+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:28.142397+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:29.142768+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:30.142938+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:31.143074+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:32.143297+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:33.143455+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:34.143610+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:35.143814+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:36.144024+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:37.144242+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:38.144417+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:39.144562+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:40.144753+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:41.144885+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:42.145069+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:43.145215+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 15917056 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:44.145350+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:45.145485+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:46.145643+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:47.145886+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:48.146076+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:49.146239+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:50.146450+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:51.146593+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:52.146741+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:53.146874+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:54.147006+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:55.147136+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:56.147264+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:57.147389+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:58.147533+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:59.147698+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:00.147924+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:01.148072+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:02.148316+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:03.148851+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:04.149131+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:05.149300+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:06.149605+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:07.149800+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:08.149956+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 15908864 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:09.150123+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:10.150328+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:11.150530+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:12.150784+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:13.150946+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:14.151150+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:15.151376+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 15900672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:16.151527+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:17.151770+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:18.151960+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:19.152176+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:20.152432+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:21.152696+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:22.152869+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:23.153092+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:24.153324+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:25.153555+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:26.153747+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:27.153982+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:28.154172+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:29.154378+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:30.154745+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:31.154972+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:32.155113+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:33.155313+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:34.155561+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:35.155738+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:36.156490+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:37.156692+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:38.156863+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:39.157018+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:40.157180+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:41.157308+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:42.157533+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:43.157708+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 15892480 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:44.157863+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:45.158009+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:46.158156+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:47.158295+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:48.158446+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:49.158573+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:50.158748+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:51.158937+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:52.159177+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:53.159363+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:54.159503+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:55.159689+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:56.159808+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:57.159918+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:58.160089+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:59.160277+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:00.160554+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:01.160702+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:02.160883+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:03.161056+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:04.161186+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:05.161386+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:06.161623+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:07.161849+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:08.161990+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:09.162218+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:10.162438+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:11.162616+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 15884288 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:12.162819+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:13.163218+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:14.163431+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:15.163576+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:16.163774+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 15876096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:17.163908+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:18.164165+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:19.164378+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:20.164552+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:21.164773+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:22.164941+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:23.165165+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:24.165308+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:25.165511+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:26.165692+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:27.165841+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:28.165940+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:29.166082+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:30.166250+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:31.166372+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:32.166534+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:33.166690+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:34.166870+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:35.167047+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:36.167183+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:37.167406+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:38.167535+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:39.167673+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:40.167831+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:41.167997+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:42.168129+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:43.168259+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:44.168387+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:45.168506+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:46.168729+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:47.168865+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:48.169026+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 15867904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:49.169187+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:50.169360+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:51.169545+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:52.169704+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:53.169856+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:54.170089+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:55.170230+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:56.170386+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:57.170529+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:58.170710+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:59.170855+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:00.171078+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:01.171197+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:02.171328+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:03.171469+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:04.171610+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:05.171727+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:06.171873+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:07.172021+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:08.172163+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:09.172399+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:10.172560+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:11.172686+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:12.172805+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:13.172905+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:14.173043+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:15.173166+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:16.173318+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:17.173476+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:18.173747+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:19.173932+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:20.174136+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:21.174294+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:22.174450+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:23.174587+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:24.174760+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:25.174893+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:26.175057+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:27.175231+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:28.175377+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:29.175553+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:30.175727+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:31.175907+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:32.176037+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:33.176159+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:34.176279+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:35.176932+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:36.177481+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:37.177963+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:38.178157+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:39.178440+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:40.178811+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:41.179185+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:42.179528+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:43.179692+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:44.179989+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:45.180180+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:46.180345+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:47.180558+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:48.180733+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:49.180982+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:50.181225+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:51.181495+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:52.181642+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:53.181771+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:54.181883+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:55.181980+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:56.182100+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:57.182291+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:58.182452+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:59.182717+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:00.182868+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:01.182989+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:02.183147+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:03.183333+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:04.183541+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:05.183703+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:06.183890+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:07.184076+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:08.184315+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:09.184491+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:10.184709+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:11.184855+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:12.185087+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:13.185246+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:14.185448+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:15.185632+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:16.185830+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:17.185977+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:18.186141+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:19.186327+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:20.186535+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:21.186675+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:22.186802+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:23.186978+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:24.187123+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:25.187270+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:26.187468+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:27.187595+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:28.187744+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:29.187869+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:30.188024+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:31.188164+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:32.188276+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:33.188393+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:34.188521+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:35.188712+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:36.188850+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:37.188995+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:38.189155+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:39.189295+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:40.189469+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:41.189708+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:42.189908+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:43.190101+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:44.190249+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:45.190494+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:46.190711+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:47.190875+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:48.191058+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:49.191214+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:50.191401+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 15851520 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:51.191539+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:52.191695+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:53.191845+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:54.191980+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:55.192131+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:56.192323+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:57.192477+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:58.192594+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:59.192766+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:00.193370+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:01.193502+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:02.193625+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:03.193718+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:04.193871+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:05.194001+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:06.194127+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:07.194260+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:08.194417+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:09.194592+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:10.194774+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:11.194916+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:12.195042+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:13.195217+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:14.195358+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:15.195543+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:16.195705+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:17.195850+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:18.196004+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:19.196137+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:20.196299+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:21.196445+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:22.196571+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:23.196730+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:24.196889+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:25.197053+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:26.197198+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:27.197315+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:28.197452+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:29.197674+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:30.197910+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:31.198130+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:32.198250+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:33.198394+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:34.198533+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:35.200614+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:36.201041+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:37.201759+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:38.201875+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:39.202029+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:40.202213+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:41.202399+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:42.202551+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:43.202736+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:44.202894+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:45.203053+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 15843328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:46.203181+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:47.203312+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 15835136 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:48.203447+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:49.203606+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:50.203793+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:51.203979+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:52.204102+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:53.204312+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:54.204445+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:55.204626+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 7550 writes, 30K keys, 7550 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7550 writes, 1591 syncs, 4.75 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 336 writes, 1029 keys, 336 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s
                                           Interval WAL: 336 writes, 132 syncs, 2.55 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:56.204847+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:57.205003+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:58.205145+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:59.205402+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:00.205567+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:01.206008+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:02.206188+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 15966208 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: mgrc ms_handle_reset ms_handle_reset con 0x556c5ebb0c00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3272136490
Jan 31 08:49:01 compute-0 ceph-osd[86929]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3272136490,v1:192.168.122.100:6801/3272136490]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: get_auth_request con 0x556c5f115c00 auth_method 0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:03.206356+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 15728640 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:04.206490+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 15728640 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:05.206764+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 15728640 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 ms_handle_reset con 0x556c5be81800 session 0x556c5c966a80
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f162c00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:06.206897+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 15859712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 ms_handle_reset con 0x556c5c4b9000 session 0x556c5da5c700
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f108c00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 ms_handle_reset con 0x556c5c4b8800 session 0x556c5c496380
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5c4b9000
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:07.207017+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:08.207169+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:09.207318+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:10.207475+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:11.207643+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:12.207807+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:13.207977+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:14.208153+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:15.208712+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:16.208898+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:17.209068+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:18.209232+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:19.209560+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:20.209859+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:21.210037+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:22.210161+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077993 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:23.210387+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fcdfb000/0x0/0x4ffc00000, data 0x14d1b3/0x22f000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:24.210547+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:25.210805+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 15990784 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5e1bec00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 423.101470947s of 423.258819580s, submitted: 14
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:26.210965+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 24272896 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:27.211075+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 147 ms_handle_reset con 0x556c5e1bec00 session 0x556c5f0f3180
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 24248320 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x94d1e6/0xa31000, compress 0x0/0x0/0x0, omap 0x15e6f, meta 0x2bba191), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5e1ba800
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185702 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:28.211196+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:29.211340+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 ms_handle_reset con 0x556c5e1ba800 session 0x556c5f0f2700
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:30.211509+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb981000/0x0/0x4ffc00000, data 0x15c0964/0x16a9000, compress 0x0/0x0/0x0, omap 0x15f08, meta 0x2bba0f8), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:31.211708+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:32.211843+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb981000/0x0/0x4ffc00000, data 0x15c0964/0x16a9000, compress 0x0/0x0/0x0, omap 0x15f08, meta 0x2bba0f8), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196892 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:33.211962+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb981000/0x0/0x4ffc00000, data 0x15c0964/0x16a9000, compress 0x0/0x0/0x0, omap 0x15f08, meta 0x2bba0f8), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:34.212075+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:35.212245+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:36.212380+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:37.212500+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb981000/0x0/0x4ffc00000, data 0x15c0964/0x16a9000, compress 0x0/0x0/0x0, omap 0x15f08, meta 0x2bba0f8), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196892 data_alloc: 218103808 data_used: 15321
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:38.212637+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb981000/0x0/0x4ffc00000, data 0x15c0964/0x16a9000, compress 0x0/0x0/0x0, omap 0x15f08, meta 0x2bba0f8), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:39.212838+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 24068096 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:40.212990+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f103800
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87236608 unmapped: 23920640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.227265358s of 14.738695145s, submitted: 24
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:41.213336+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0x15c0941/0x16a8000, compress 0x0/0x0/0x0, omap 0x15f08, meta 0x2bba0f8), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 23912448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:42.213497+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 23912448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb97f000/0x0/0x4ffc00000, data 0x15c2531/0x16ab000, compress 0x0/0x0/0x0, omap 0x15fa1, meta 0x2bba05f), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:43.213699+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199252 data_alloc: 218103808 data_used: 15337
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 23896064 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:44.213864+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb980000/0x0/0x4ffc00000, data 0x15c2521/0x16aa000, compress 0x0/0x0/0x0, omap 0x15fa1, meta 0x2bba05f), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 23879680 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 149 ms_handle_reset con 0x556c5f103800 session 0x556c5e27fc00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:45.214018+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 23879680 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:46.214166+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 23879680 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc182000/0x0/0x4ffc00000, data 0xdc24fe/0xea9000, compress 0x0/0x0/0x0, omap 0x15fa1, meta 0x2bba05f), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:47.214303+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 23879680 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5e1c2c00
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:48.214514+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157015 data_alloc: 218103808 data_used: 15356
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23740416 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 149 handle_osd_map epochs [149,150], i have 150, src has [1,150]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:49.214704+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:50.214897+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 150 ms_handle_reset con 0x556c5e1c2c00 session 0x556c5db1a1c0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:51.215176+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fcdf1000/0x0/0x4ffc00000, data 0x154103/0x23b000, compress 0x0/0x0/0x0, omap 0x16025, meta 0x2bb9fdb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:52.215335+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fcdf1000/0x0/0x4ffc00000, data 0x154103/0x23b000, compress 0x0/0x0/0x0, omap 0x16025, meta 0x2bb9fdb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:53.215526+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096231 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:54.215713+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fcdf1000/0x0/0x4ffc00000, data 0x154103/0x23b000, compress 0x0/0x0/0x0, omap 0x16025, meta 0x2bb9fdb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:55.215870+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:56.216033+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:57.216186+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:58.216327+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096231 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.519285202s of 18.651802063s, submitted: 56
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:59.216519+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdf1000/0x0/0x4ffc00000, data 0x154103/0x23b000, compress 0x0/0x0/0x0, omap 0x16025, meta 0x2bb9fdb), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:00.216784+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:01.216956+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:02.217113+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:03.217249+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:04.217381+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:05.217505+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:06.217626+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:07.218677+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:08.218861+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:09.219025+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:10.219197+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:11.219318+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:12.219562+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:13.219706+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:14.219863+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:15.220027+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:16.220149+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:17.220275+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:18.220439+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:19.220573+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:20.220839+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:21.220997+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:22.221134+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:23.221306+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:24.221722+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:25.221937+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:26.222142+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:27.222303+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:28.222439+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:29.222742+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:30.222956+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 23904256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 ms_handle_reset con 0x556c5c483800 session 0x556c5bde9180
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5c483800
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:31.223096+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86786048 unmapped: 24371200 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:32.223240+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:33.223362+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:34.223494+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:35.223796+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:36.223968+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:37.224116+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:38.224254+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099725 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:39.224393+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:40.224590+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:41.224716+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:42.224991+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.881244659s of 43.207160950s, submitted: 14
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdec000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:43.225172+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099077 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:44.225358+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:45.225497+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:46.225726+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:47.225965+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:48.226129+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:49.226281+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:50.226441+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:51.226716+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:52.226846+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:53.227001+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:54.227132+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:55.227269+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:56.227472+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:57.227670+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:58.227861+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:59.228097+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:00.228334+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:01.228505+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:02.228698+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:03.228847+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:04.229034+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:05.229193+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.107103348s of 22.897233963s, submitted: 90
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:06.229334+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:07.229883+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:08.230051+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:09.230203+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:10.230458+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:11.230887+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:12.231051+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:13.231202+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:14.231362+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:15.231558+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:16.231722+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:17.231897+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:18.232061+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:19.232264+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:20.232409+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:21.232583+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:22.232748+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:23.233185+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:24.233346+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:25.233499+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:26.233635+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:27.233866+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:28.234028+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:29.234176+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:30.234337+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:31.234506+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:32.234623+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:33.234790+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:34.234942+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:35.235170+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:36.235337+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:37.235555+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:38.235702+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:39.235888+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:40.236482+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:41.236692+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:42.237015+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:43.237155+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:44.237314+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:45.237431+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:46.237588+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:47.237824+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:48.238095+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:49.238332+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:50.238608+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:51.238874+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:52.239088+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:53.239246+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:54.239461+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:55.240521+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:56.240812+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:57.241007+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:58.241216+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:59.241435+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:00.241638+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:01.241874+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:02.242075+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:03.242277+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:04.242516+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:05.243140+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:06.243345+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:07.243495+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:08.243732+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:09.243870+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:10.244101+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:11.244282+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:12.244498+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:13.244719+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:14.244938+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:15.245128+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:16.245337+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:17.245532+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:18.245693+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:19.245838+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:20.246014+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:21.246160+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:22.246289+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:23.246734+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:24.246895+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:25.247029+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:26.247235+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:27.247452+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:28.247597+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:29.247705+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:30.247879+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:31.248044+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:32.248180+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:33.248322+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:34.248587+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:35.248705+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:36.248911+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:37.249056+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:38.249263+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:39.249411+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:40.249730+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:41.249899+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:42.250146+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:43.250372+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:44.250530+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:45.250705+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:46.250835+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:47.250981+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:48.251093+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:49.251262+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:50.251535+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:51.251739+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:52.251887+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:53.252070+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:54.252288+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:55.252440+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:56.252625+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:57.252875+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:58.253040+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:59.253177+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:00.253351+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:01.253515+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:02.253735+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:03.253932+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:04.254062+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:05.254222+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:06.254368+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:07.254547+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:08.254693+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:09.254824+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:10.254957+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:11.255106+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:12.255270+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:13.255432+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:14.255555+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:15.255717+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:16.255899+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:17.256056+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:18.256178+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:19.256389+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:20.256582+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:21.256750+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:22.256928+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:23.257077+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:24.257240+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:25.257511+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:26.257697+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:27.257827+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:28.258065+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:29.258211+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:30.258438+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:31.258614+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:32.258760+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:33.258896+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:34.259016+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:35.259149+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:36.259267+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:37.259400+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:38.259583+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:39.259702+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:40.259908+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:41.260052+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:42.260177+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:43.260307+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:44.260506+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:45.260643+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:46.260804+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:47.260942+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:48.261091+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:49.261335+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:50.261569+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:51.261752+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:52.261896+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:53.262092+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:54.262246+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:55.262375+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:56.262547+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:57.262695+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:58.262839+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:59.262976+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:00.263140+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:01.263320+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:02.263474+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:03.263576+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:04.263736+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:05.263906+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:06.264035+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:07.264166+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:08.264296+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:09.264440+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:10.264614+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:11.264731+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:12.264847+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:13.265007+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:14.265134+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:15.265276+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:16.265421+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:17.265723+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:18.265866+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:19.266061+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:20.266272+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:21.266437+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:22.266581+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:23.266737+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:24.266927+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:25.267091+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:26.267257+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:27.267407+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:28.267540+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:29.267686+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:30.267837+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:31.267967+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:32.268104+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:33.268218+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:34.268341+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:35.268528+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:36.269057+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:37.269765+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:38.269914+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:39.270049+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:40.270439+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:41.270686+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:42.270905+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:43.271048+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:44.271317+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:45.271818+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:46.271960+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:47.272082+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:48.272223+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:49.272356+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:50.272638+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:51.272785+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:52.273026+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:53.273288+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:54.273537+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:55.273735+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:56.273906+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:57.274182+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:58.274405+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:59.274528+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:00.274690+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 24240128 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:01.274823+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:02.281257+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:03.281448+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:04.281757+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:05.281929+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:06.282095+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:07.282251+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:08.282393+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:09.282561+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:10.282726+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:11.282922+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:12.283092+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:13.283235+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:14.283383+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:15.283516+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:16.283641+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:17.283819+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:18.283941+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:19.284196+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:20.284364+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:21.284507+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:22.284642+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:23.284835+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:24.284957+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:25.285155+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:26.285495+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:27.285683+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:28.285816+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:29.286009+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:30.286188+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:31.286341+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 24231936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:32.286480+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:33.286688+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:34.286841+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:35.287010+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:36.287150+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:37.287379+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:38.287531+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:39.287682+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:40.287846+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:41.287983+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:42.288135+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:43.288307+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:44.288469+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:45.288725+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:46.288872+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:47.289064+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:48.289182+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:49.289458+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:50.289622+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:51.289749+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:52.289876+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:53.290065+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:54.290197+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:55.290355+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:56.290509+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:57.290700+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:58.290834+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:59.290952+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 24223744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:00.291118+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:01.291251+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:02.291369+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:03.291566+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:04.291724+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:05.291889+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:06.292050+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:07.292172+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:08.292357+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:09.292487+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:10.292643+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:11.292901+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:12.293070+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:13.293229+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:14.293414+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:15.293575+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:16.293735+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:17.293931+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:18.294089+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:19.294237+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:20.294398+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:21.294580+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:22.294733+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:23.294899+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:24.295036+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:25.295230+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:26.295362+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:27.295554+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:28.295696+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:29.295868+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:30.296038+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:31.296192+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:32.296346+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:33.296514+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:34.296774+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:35.297112+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:36.297321+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:37.297524+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:38.297824+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:39.297989+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:40.298190+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:41.298405+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:42.298585+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:43.298790+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:44.299020+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:45.299181+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:46.299388+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:47.299553+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:48.299760+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:49.299912+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:50.300104+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:51.300266+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:52.300486+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:53.300719+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:54.300895+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:55.301053+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:56.301237+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:57.301535+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:58.301728+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:59.301990+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:00.302206+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:01.302422+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:02.302589+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:03.302721+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:04.302884+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:05.303030+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:06.303177+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:07.303329+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:08.303610+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:09.303824+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:10.304018+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:11.304218+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:12.304387+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:13.304625+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:14.304918+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:15.305118+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:16.305251+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:17.305394+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:18.305587+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:19.305769+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:20.306016+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:21.306226+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:22.306487+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:23.306734+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:24.306957+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:25.307213+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:26.307434+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:27.307601+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:28.307852+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:29.308113+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:30.308319+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:31.308539+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:32.308736+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:33.308995+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:34.309196+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:35.309386+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:36.309555+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:37.309780+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:38.310037+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:39.310335+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:40.310617+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:41.310793+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:42.310957+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:43.311183+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:44.311345+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:45.311529+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:46.311709+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:47.311885+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:48.312092+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:49.312337+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:50.312640+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:51.312863+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 24207360 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:52.313067+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:53.313292+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:54.313469+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:55.313775+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:56.313953+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:57.314183+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:58.314363+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:59.314527+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:00.314727+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:01.314931+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:02.315099+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:03.315305+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:04.315482+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:05.315736+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:06.315913+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:07.316110+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:08.316283+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:09.316481+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:10.316769+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:11.316953+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:12.317103+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:13.317284+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:14.317481+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:15.317728+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:16.317978+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:17.318179+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:18.318360+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:19.318550+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:20.318762+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:21.319000+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:22.319193+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:23.319420+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:24.319600+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:25.319836+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:26.320027+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:27.320249+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:28.320418+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:29.320629+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:30.320936+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:31.321143+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:32.321327+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:33.321518+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:34.321731+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:35.321956+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:36.322143+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:37.322317+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:38.322530+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:39.322820+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:40.323089+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:41.323276+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:42.323497+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:43.323666+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:44.323831+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:45.324110+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:46.324364+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:47.324575+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:48.324755+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:49.324978+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:50.325349+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:51.325605+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:52.325839+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:53.326040+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:54.326239+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:55.326412+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 7919 writes, 31K keys, 7919 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7919 writes, 1754 syncs, 4.51 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 369 writes, 967 keys, 369 commit groups, 1.0 writes per commit group, ingest: 0.43 MB, 0.00 MB/s
                                           Interval WAL: 369 writes, 163 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:56.326619+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:57.326797+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:58.326964+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:59.327166+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:00.327363+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:01.327543+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:02.327750+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:03.327981+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:04.328173+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:05.328427+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:06.328619+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:07.328856+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:08.329137+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:09.329315+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 24199168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:10.329545+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:11.329776+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:12.329997+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:13.330234+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:14.330520+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:15.330767+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:16.330961+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:17.331128+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:18.331295+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:19.331470+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:20.331736+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:21.331854+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:22.332015+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:23.332154+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:24.332343+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:25.332453+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:26.332565+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:27.332713+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:28.332860+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:29.333034+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:30.333215+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:31.333402+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:32.333540+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:33.333634+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:34.333755+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:35.333876+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:36.333998+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:37.334115+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:38.334243+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:39.334381+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:40.334548+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:41.334688+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:42.334793+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:43.334953+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:44.335091+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:45.335286+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:46.335430+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:47.335560+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:48.336119+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:49.336281+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24190976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:50.336517+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:51.336753+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:52.336939+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:53.337181+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:54.337378+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:55.337550+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:56.337705+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:57.338138+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:58.338505+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:59.338925+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:00.339364+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:01.339710+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:02.340012+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:03.340166+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:04.340399+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:05.340743+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:06.341007+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:07.341151+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:08.341318+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:09.341485+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:10.341734+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:11.341842+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:12.342098+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:13.342255+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:14.342531+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:15.342773+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:16.342983+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:17.343188+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:18.343332+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:19.343461+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:20.343733+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:21.343882+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:22.344093+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:23.344411+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:24.344583+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:25.344813+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:26.345063+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:27.345190+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:28.345360+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:29.345496+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:30.345692+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:31.345852+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:32.346047+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:33.346189+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:34.346359+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:35.346474+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:36.346629+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:37.346818+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:38.346951+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:39.347183+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:40.347416+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:41.347611+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:42.347776+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 576.823486328s of 577.394714355s, submitted: 22
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:43.347916+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:44.348052+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:45.348221+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099077 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:46.348371+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:47.348573+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:48.348769+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:49.348972+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 24215552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:50.349164+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099077 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 24182784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:51.349289+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 24150016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:52.349485+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.513687134s of 10.063464165s, submitted: 82
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 24150016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:53.349629+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:54.349863+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:55.350053+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:56.350209+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:57.350412+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:58.350548+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:59.350683+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:00.350848+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:01.351039+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:02.351196+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:03.351420+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:04.351595+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:05.351803+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:06.352072+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:07.352289+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:08.352489+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:09.352633+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:10.352845+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:11.353067+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:12.353214+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:13.353422+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:14.353571+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:15.353707+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:16.353830+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:17.353995+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:18.354128+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:19.354299+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:20.354455+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:21.354606+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:22.354759+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:23.354934+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:24.355157+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:25.355304+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:26.355441+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:27.355620+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:28.355799+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:29.355987+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:30.356168+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:31.356315+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:32.356468+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:33.358760+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:34.358892+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:35.359020+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:36.359180+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:37.359360+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:38.359548+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:39.359731+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:40.360005+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:41.360164+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:01 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:01 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:42.360345+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:43.360486+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:01 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:44.360615+0000)
Jan 31 08:49:01 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:01 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:45.360803+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:46.360929+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:47.361105+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:48.361305+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:49.361425+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:50.361570+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:51.361769+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:52.361923+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:53.362067+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:54.362195+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:55.362338+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:56.362462+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:57.362610+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:58.362744+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:59.362925+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:00.363084+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:01.363286+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:02.363413+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:03.363533+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:04.363739+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:05.363874+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:06.363994+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:07.364113+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:08.364270+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:09.364421+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:10.364614+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:11.364730+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:12.364866+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:13.365095+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:14.365260+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:15.365427+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:16.365585+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:17.365762+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:18.365917+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:19.366080+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:20.366248+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:21.366414+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:22.366556+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:23.366722+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:24.366884+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:25.367050+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:26.367231+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:27.367396+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:28.367559+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:29.367727+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:30.367904+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:31.368049+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:32.368306+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:33.368496+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:34.368633+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:35.368761+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:36.368951+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:37.369127+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:38.369273+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:39.369490+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:40.369751+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:41.369921+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:42.370106+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:43.370270+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:44.370429+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:45.370618+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:46.370712+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:47.370960+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:48.371129+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:49.371284+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:50.371505+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:51.371692+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:52.371891+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:53.372042+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:54.372214+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:55.372412+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:56.372538+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:57.372728+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:58.372891+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:59.373067+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:00.373316+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:01.373535+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:02.373806+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:03.374007+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:04.374221+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:05.374405+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:06.374560+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:07.374732+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:08.374872+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:09.375051+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:10.375263+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:11.375460+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:12.375646+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:13.375831+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:14.376021+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:15.376153+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:16.376370+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:17.376616+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:18.376735+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:19.376921+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:20.377092+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:21.377285+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:22.377526+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:23.377791+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:24.377961+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:25.378097+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:26.378274+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:27.378430+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:28.378626+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:29.378924+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:30.379086+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:31.379281+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:32.379479+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:33.379630+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:34.379826+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:35.380026+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:36.380219+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:37.380408+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:38.385277+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:39.385496+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:40.385724+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:41.385939+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:42.386279+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:43.386458+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:44.386698+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:45.386870+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:46.387048+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:47.387180+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:48.387342+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:49.387711+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:50.387979+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:51.388132+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:52.388324+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:53.388508+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:54.388730+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:55.388971+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets getting new tickets!
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:56.389162+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _finish_auth 0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:56.390186+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:57.389293+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:58.389428+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:59.389600+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:00.389862+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:01.390051+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 23101440 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:02.390199+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: mgrc ms_handle_reset ms_handle_reset con 0x556c5f115c00
Jan 31 08:49:02 compute-0 ceph-osd[86929]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3272136490
Jan 31 08:49:02 compute-0 ceph-osd[86929]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3272136490,v1:192.168.122.100:6801/3272136490]
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: get_auth_request con 0x556c5e1c3c00 auth_method 0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88219648 unmapped: 22937600 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:03.390407+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88219648 unmapped: 22937600 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:04.390554+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88219648 unmapped: 22937600 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:05.390731+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 ms_handle_reset con 0x556c5f162c00 session 0x556c5f0d2c40
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f162400
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:06.390950+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 ms_handle_reset con 0x556c5f108c00 session 0x556c5d6ce8c0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f163400
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 ms_handle_reset con 0x556c5c4b9000 session 0x556c5f0f0c40
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f108c00
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:07.391274+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:08.391434+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:09.391619+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:10.391880+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:11.392087+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:12.392375+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:13.392556+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:14.392744+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:15.392983+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:16.393139+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:17.393508+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:18.393677+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:19.393883+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:20.394112+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:21.394265+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:22.394598+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:23.394748+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:24.394894+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:25.395032+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:26.395179+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:27.395840+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:28.395960+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:29.396100+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:30.396309+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:31.396459+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:32.396697+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:33.396850+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:34.397169+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:35.397632+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:36.397831+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:37.398024+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:38.398265+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:39.398422+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:40.398794+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:41.398998+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:42.400168+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:43.400311+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:44.400486+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:45.400763+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:46.401157+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:47.401391+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:48.401543+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:49.401721+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:50.401909+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:51.402077+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:52.402212+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:53.402400+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:54.402594+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:55.402754+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:56.402959+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:57.403187+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:58.403338+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:59.403475+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:00.403638+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:01.403808+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:02.403931+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:03.404093+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:04.404289+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:05.404478+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:06.404633+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:07.404880+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:08.405051+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:09.405223+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:10.405407+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:11.405528+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:12.405703+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:13.405905+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:14.406037+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:15.406210+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:16.406540+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:17.406729+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:18.406884+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:19.407076+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:20.407268+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:21.407474+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:22.407691+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:23.407881+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:24.408120+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:25.408282+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:26.408480+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.408707+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.408874+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.409158+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.409400+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 ms_handle_reset con 0x556c5c483800 session 0x556c5f0f1340
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f15d000
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.410056+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.410270+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.410411+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.410586+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.410754+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.410899+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.411139+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:38.411310+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.411455+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.411624+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.411778+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.411943+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.412084+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.412227+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.412428+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.412556+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.412715+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.412872+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.413040+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.413903+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.414052+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.414234+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.414352+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.414483+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.414750+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.414862+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 23068672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.414985+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.415088+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.415221+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.415457+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.415575+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.415709+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.415871+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.415994+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.416128+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.416323+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.416487+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099005 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 314.789916992s of 315.380981445s, submitted: 8
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.416644+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 23060480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.416805+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 23052288 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fcdee000/0x0/0x4ffc00000, data 0x155b82/0x23e000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.416975+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 23052288 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5e00c400
Jan 31 08:49:02 compute-0 ceph-osd[86929]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.417104+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.417260+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156985 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 14516224 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.417403+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88252416 unmapped: 22904832 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.417576+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88252416 unmapped: 22904832 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 ms_handle_reset con 0x556c5e00c400 session 0x556c5bc06380
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.417713+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.417836+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.417963+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.418085+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.418219+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.418397+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.418549+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.418728+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.418873+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.418995+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.419129+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.419314+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.419455+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.419577+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.419695+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.419887+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.420047+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.420171+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.420309+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.420431+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.420605+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.420763+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.420912+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.421021+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.421188+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.421453+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.421598+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.421851+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.422276+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.422507+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.422709+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.422883+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.423006+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.423178+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.423370+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.423712+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.423876+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.424016+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88236032 unmapped: 22921216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.424247+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.424381+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.424551+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.424749+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.424870+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.425020+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.425201+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.425426+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.425588+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.425743+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.427606+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.427811+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.427948+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.428090+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.428255+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.428415+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.428566+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.428767+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.428948+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.429118+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.429262+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.429449+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.429616+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.429906+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.430045+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.430181+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.430307+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.430491+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.430755+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.430928+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.431088+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.431232+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.431393+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.431599+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.431767+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.431989+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.432170+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.432596+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.432735+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.432951+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.433156+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.433369+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.433565+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.433758+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.433879+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145835 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.433999+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.434291+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x957741/0xa42000, compress 0x0/0x0/0x0, omap 0x16136, meta 0x2bb9eca), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.434615+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.434993+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88244224 unmapped: 22913024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: handle_auth_request added challenge on 0x556c5f10a800
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 92.370269775s of 94.305572510s, submitted: 25
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.435221+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146869 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.435414+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.435774+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 153 ms_handle_reset con 0x556c5f10a800 session 0x556c5dbc81c0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.435961+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fcde8000/0x0/0x4ffc00000, data 0x15930e/0x244000, compress 0x0/0x0/0x0, omap 0x161cf, meta 0x2bb9e31), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:46.436189+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.436371+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105881 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.436514+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.436794+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.437034+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.437235+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fcde8000/0x0/0x4ffc00000, data 0x15930e/0x244000, compress 0x0/0x0/0x0, omap 0x161cf, meta 0x2bb9e31), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88375296 unmapped: 22781952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.437463+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.248749733s of 10.423717499s, submitted: 17
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.437641+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.437919+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.438127+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.438289+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.438455+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _renew_subs
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.438715+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.438883+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.439130+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.439272+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.439432+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.439602+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.439741+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.439904+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.440018+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.440133+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.440388+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.440534+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.440740+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.440894+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.441030+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.441195+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.441419+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.441552+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.441695+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.441831+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.441970+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.442105+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.442260+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.442492+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.442682+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.442910+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.443064+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.443276+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.443422+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.443730+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.443957+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.444113+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.444305+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.444488+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.444693+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.444848+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.445009+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.445246+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.445413+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.445627+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.445909+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.446105+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.446320+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.446520+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.446725+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.446942+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.447106+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.447286+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.447483+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.447696+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.447848+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.447987+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.448148+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.448286+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.448546+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.448702+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.448872+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.449174+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 8190 writes, 32K keys, 8190 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8190 writes, 1880 syncs, 4.36 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 271 writes, 571 keys, 271 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 271 writes, 126 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.449328+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.449544+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.449733+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.449889+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.450108+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.450248+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.450512+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.450819+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.450986+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.451151+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.451238+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.451394+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.451523+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.451688+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.451866+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.452014+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.452179+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.452381+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.452611+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.452756+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.452884+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.453063+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.453244+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.453449+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.453615+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.453728+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.453844+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.453937+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.454058+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.454197+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:26.454373+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:27.454539+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88391680 unmapped: 22765568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:28.454692+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 22642688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'config diff' '{prefix=config diff}'
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'config show' '{prefix=config show}'
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:29.454819+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:02 compute-0 ceph-osd[86929]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 22454272 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109375 data_alloc: 218103808 data_used: 19382
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 08:49:02 compute-0 ceph-osd[86929]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fcde3000/0x0/0x4ffc00000, data 0x15ad8d/0x247000, compress 0x0/0x0/0x0, omap 0x162f0, meta 0x2bb9d10), peers [0,2] op hist [])
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:30.454976+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 22257664 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: tick
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_tickets
Jan 31 08:49:02 compute-0 ceph-osd[86929]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:31.455114+0000)
Jan 31 08:49:02 compute-0 ceph-osd[86929]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 21995520 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:02 compute-0 ceph-osd[86929]: do_command 'log dump' '{prefix=log dump}'
Jan 31 08:49:02 compute-0 rsyslogd[1001]: imjournal from <np0005603654:ceph-osd>: begin to drop messages due to rate-limiting
Jan 31 08:49:02 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14738 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:02 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:49:02 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:02 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:02 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2894993142' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 08:49:02 compute-0 ceph-mon[75294]: from='client.14734 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:02 compute-0 ceph-mon[75294]: from='client.14738 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 31 08:49:02 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/429864024' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 31 08:49:02 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:03 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14744 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:03 compute-0 ceph-mgr[75591]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:49:03 compute-0 ceph-dc03f344-536f-5591-add9-31059f42637c-mgr-compute-0-lhuavc[75587]: 2026-01-31T08:49:03.200+0000 7f9067f84640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:49:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 31 08:49:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811368229' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 31 08:49:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 31 08:49:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246693678' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 31 08:49:03 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 31 08:49:03 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702202729' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 31 08:49:03 compute-0 ceph-mon[75294]: pgmap v1710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:03 compute-0 ceph-mon[75294]: from='client.14740 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:03 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/429864024' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 31 08:49:03 compute-0 ceph-mon[75294]: from='client.14744 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:03 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2811368229' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 31 08:49:03 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4246693678' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 31 08:49:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 31 08:49:04 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208696655' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 31 08:49:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 31 08:49:04 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358245822' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 31 08:49:04 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:04 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 31 08:49:04 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1377989417' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 31 08:49:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1702202729' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 31 08:49:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/208696655' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 31 08:49:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1358245822' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 31 08:49:04 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1377989417' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 31 08:49:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 31 08:49:05 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1321597763' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 31 08:49:05 compute-0 podman[267391]: 2026-01-31 08:49:05.220531724 +0000 UTC m=+0.084146893 container health_status df7308bbab10e1343c545215877d67b5e8cda0519015433eb4478325f482c531 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 08:49:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 31 08:49:05 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/637115969' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 31 08:49:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 31 08:49:05 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022933661' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 31 08:49:05 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 31 08:49:05 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176308253' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 31 08:49:05 compute-0 ceph-mon[75294]: pgmap v1711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:05 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1321597763' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 31 08:49:05 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/637115969' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 31 08:49:05 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4022933661' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 31 08:49:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241229250' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 08:49:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901501562' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:06.480188+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:07.480344+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:08.480509+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:09.480690+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:10.480843+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:11.481051+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:12.481178+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:13.481280+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:14.481413+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:15.481574+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:16.481719+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:17.481832+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:18.482001+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:19.482172+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 90112 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:20.482284+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 90112 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:21.482430+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 90112 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:22.482537+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 90112 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:23.482736+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 90112 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:24.482859+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 90112 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:25.483034+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:26.483211+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:27.483455+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:28.483573+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:29.483730+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:30.483873+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:31.484011+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:32.484182+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:33.484305+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:34.484448+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:35.484577+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:36.484770+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:37.484927+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:38.485061+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:39.485220+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:40.485356+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:41.485484+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:42.485626+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:43.485874+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:44.486060+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:45.486201+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:46.486345+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:47.486467+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:48.486573+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:49.486689+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:50.486888+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:51.487049+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:52.487836+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:53.488039+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:54.488361+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:55.488639+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:56.488816+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:57.489226+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:58.489597+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:15:59.489952+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:00.490322+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:01.490476+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:02.490677+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:03.490975+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:04.491139+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:05.491395+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:06.491603+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:07.491787+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 65536 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:08.492049+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:09.492264+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:10.492520+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:11.492740+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:12.493023+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:13.493209+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:14.493498+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:15.493643+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:16.493900+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:17.494040+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:18.494199+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:19.494321+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:20.494441+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:21.494593+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 49152 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:22.494735+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 40960 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:23.494853+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 40960 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:24.495025+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 40960 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:25.495180+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16384 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:26.495384+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16384 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:27.495530+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread fragmentation_score=0.000117 took=0.000019s
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 8192 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:28.495631+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:29.495783+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:30.495983+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:31.496123+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:32.496275+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:33.496423+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:34.496536+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:35.496740+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:36.497208+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:37.497321+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:38.497521+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:39.497946+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:40.498259+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:41.498406+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 1040384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:42.498526+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:43.498672+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:44.498800+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:45.498937+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:46.499169+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:47.499295+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:48.499431+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:49.499561+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:50.499693+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:51.499830+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:52.499950+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:53.500070+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:54.500208+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:55.500416+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:56.500738+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 1015808 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:57.500888+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:58.501048+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:16:59.501184+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:00.501329+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:01.501472+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:02.501606+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:03.501716+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:04.501863+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:05.502034+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:06.502234+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:07.502371+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 999424 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:08.502515+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:09.502722+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:10.502881+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:11.503030+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:12.503163+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:13.503299+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:14.503441+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:15.503566+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:16.503731+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:17.503939+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:18.508095+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:19.508226+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:20.508390+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:21.508542+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:22.508725+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:23.508958+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:24.509125+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:25.509317+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:26.509496+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 958464 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:27.509610+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:28.509788+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:29.510073+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:30.510252+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:31.510436+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:32.510602+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:33.510737+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:34.510938+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:35.511064+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:36.511257+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:37.511420+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 933888 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:38.511601+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 933888 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:39.511768+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 933888 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:40.511909+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 933888 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:41.512072+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 933888 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:42.512267+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:43.512416+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:44.512587+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:45.512740+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:46.512917+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:47.513056+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 909312 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:48.513179+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 909312 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:49.513307+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 909312 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:50.513459+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 909312 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:51.513603+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5851 writes, 24K keys, 5851 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5851 writes, 997 syncs, 5.87 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a0874b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563a6a087a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 876544 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:52.513750+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 851968 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:53.514476+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 851968 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:54.514598+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 851968 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:55.514718+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 851968 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:56.514912+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 851968 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:57.515091+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 851968 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:58.515259+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 851968 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:59.515458+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:00.515633+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:01.515825+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:02.516003+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:03.516336+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:04.516588+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:05.517054+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:06.517275+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 843776 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:07.517636+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:08.518052+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:09.518360+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:10.518581+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:11.518728+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:12.518960+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:13.519554+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:14.520076+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:15.520354+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 827392 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:16.520564+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 819200 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:17.520901+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 819200 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:18.521185+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 819200 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:19.521466+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 819200 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:20.521594+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 819200 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:21.521839+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 811008 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:22.522120+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 811008 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:23.522363+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 811008 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:24.522535+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 811008 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:25.522724+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 802816 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:26.522895+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 802816 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:27.523074+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:28.523253+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:29.523499+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:30.523640+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:31.523873+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:32.524091+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:33.524357+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:34.524536+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:35.524870+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 786432 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:36.526427+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 778240 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:37.526882+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 778240 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:38.528181+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 778240 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:39.528741+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 770048 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:40.529260+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 770048 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:41.529427+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:42.529725+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:43.530004+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:44.530408+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:45.530673+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:46.530984+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:47.531163+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:48.531433+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:49.531619+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:50.531948+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:51.532090+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:52.532210+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:53.532453+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:54.532625+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:55.532711+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:56.533048+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:57.533444+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:58.533796+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:59.534041+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:00.534686+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:01.535020+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:02.535218+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:03.535380+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:04.535609+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:05.535838+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:06.536084+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:07.536225+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:08.536488+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:09.536759+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:10.537038+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:11.537268+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:12.537501+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:13.537796+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:14.537983+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:15.538190+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:16.538428+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:17.538602+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:18.538869+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:19.539211+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:20.539421+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:21.539584+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:22.539772+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:23.539917+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:24.540136+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 753664 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:25.540427+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 745472 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:26.540691+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 745472 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:27.540842+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 745472 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:28.541023+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 745472 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:29.541174+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 745472 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:30.541412+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 745472 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:31.541554+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 745472 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:32.541761+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:33.541911+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:34.542064+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:35.542195+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:36.542440+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:37.542597+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:38.542787+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:39.542919+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:40.543076+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:41.543247+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 729088 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:42.543394+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 277.798919678s of 277.971801758s, submitted: 10
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 720896 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:43.543555+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 704512 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:44.543719+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 688128 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:45.543898+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 655360 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:46.544068+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 647168 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029257 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:47.544213+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 622592 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:48.544344+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 606208 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:49.544458+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 606208 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:50.544594+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 606208 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:51.544730+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 581632 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:52.544902+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.151225567s of 10.187254906s, submitted: 60
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 565248 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:53.545084+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 532480 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:54.545210+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 516096 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:55.545317+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 483328 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:56.545590+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 434176 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:57.545710+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [0,0,0,1])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 434176 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:58.546012+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 425984 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:59.546143+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 425984 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:00.546386+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 425984 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:01.546531+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 425984 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:02.546813+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 417792 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:03.547002+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 417792 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:04.547187+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:05.547343+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:06.547595+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:07.547739+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:08.547936+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:09.548142+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:10.548322+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:11.548532+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 409600 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:12.548729+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:13.548895+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:14.549094+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:15.549236+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:16.549708+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:17.549871+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:18.550237+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:19.550393+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:20.550754+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:21.550892+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:22.551155+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:23.551307+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:24.551601+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:25.551746+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:26.552054+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:27.552204+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:28.552352+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:29.552545+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:30.553068+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:31.553206+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:32.553425+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:33.553569+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:34.553791+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:35.553956+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:36.554119+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:37.554262+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:38.554447+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:39.554600+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:40.554766+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:41.554912+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:42.555149+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029185 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:43.555284+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6b2eac00
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xcd282/0x1a3000, compress 0x0/0x0/0x0, omap 0x188d5, meta 0x2bb772b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.092628479s of 51.495975494s, submitted: 46
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:44.555476+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 212992 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:45.555629+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 212992 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:46.555820+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 212992 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:47.555971+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035453 data_alloc: 218103808 data_used: 6480
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:48.556157+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 140 ms_handle_reset con 0x563a6b2eac00 session 0x563a6bbd1500
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 212992 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:49.556356+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6b2eb000
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xd25e9/0x1ad000, compress 0x0/0x0/0x0, omap 0x1907d, meta 0x2bb6f83), peers [1,2] op hist [0,0,1])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 73728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:50.556572+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:51.556705+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 16736256 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:52.556820+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082764 data_alloc: 218103808 data_used: 7065
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _renew_subs
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 16736256 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:53.556939+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 16736256 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _renew_subs
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.301296234s of 10.082432747s, submitted: 35
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:54.557073+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc674000/0x0/0x4ffc00000, data 0x8d5c27/0x9b4000, compress 0x0/0x0/0x0, omap 0x1966b, meta 0x2bb6995), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 142 ms_handle_reset con 0x563a6b2eb000 session 0x563a6df05a40
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:55.557209+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:56.557448+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc674000/0x0/0x4ffc00000, data 0x8d5c27/0x9b4000, compress 0x0/0x0/0x0, omap 0x1966b, meta 0x2bb6995), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:57.557615+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089296 data_alloc: 218103808 data_used: 7065
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:58.557780+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:59.557954+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:00.558124+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:01.558271+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc674000/0x0/0x4ffc00000, data 0x8d5c27/0x9b4000, compress 0x0/0x0/0x0, omap 0x1966b, meta 0x2bb6995), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:02.558472+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089296 data_alloc: 218103808 data_used: 7065
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16719872 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:03.558624+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e630000
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 16580608 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:04.558817+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 16572416 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.656755447s of 11.020708084s, submitted: 4
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:05.558942+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc674000/0x0/0x4ffc00000, data 0x8d77f4/0x9b6000, compress 0x0/0x0/0x0, omap 0x19885, meta 0x2bb677b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 16572416 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:06.559138+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 16572416 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc674000/0x0/0x4ffc00000, data 0x8d77f4/0x9b6000, compress 0x0/0x0/0x0, omap 0x19885, meta 0x2bb677b), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:07.559280+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092338 data_alloc: 218103808 data_used: 7065
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 16523264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:08.559456+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 143 ms_handle_reset con 0x563a6e630000 session 0x563a6d105180
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 16523264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:09.559577+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 16523264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:10.559747+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 16523264 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc676000/0x0/0x4ffc00000, data 0x8d77f4/0x9b6000, compress 0x0/0x0/0x0, omap 0x19a43, meta 0x2bb65bd), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:11.559921+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e631400
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 16392192 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:12.560050+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095328 data_alloc: 218103808 data_used: 7065
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:13.560184+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _renew_subs
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:14.560353+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 15220736 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.831480026s of 10.070364952s, submitted: 55
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:15.560479+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 145 ms_handle_reset con 0x563a6e631400 session 0x563a6bbd0e00
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 15212544 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:16.560672+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc66e000/0x0/0x4ffc00000, data 0xdae40/0x1bb000, compress 0x0/0x0/0x0, omap 0x1a113, meta 0x2bb5eed), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 15212544 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:17.560792+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057198 data_alloc: 218103808 data_used: 7065
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 15212544 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:18.560995+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 15212544 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:19.561163+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc66e000/0x0/0x4ffc00000, data 0xdae40/0x1bb000, compress 0x0/0x0/0x0, omap 0x1a113, meta 0x2bb5eed), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 15212544 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:20.561346+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 15212544 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:21.561497+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 15212544 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:22.561719+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057198 data_alloc: 218103808 data_used: 7065
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:23.561857+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:24.562017+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:25.562234+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:26.562425+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:27.562557+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:28.562743+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:29.562910+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:30.563124+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:31.563320+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:32.563518+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:33.563714+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79167488 unmapped: 15204352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:34.563907+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:35.564122+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:36.564380+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:37.564526+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:38.564681+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:39.564804+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:40.564988+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:41.565220+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:42.565402+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:43.565614+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:44.565894+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:45.566091+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:46.566384+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:47.566700+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:48.566858+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:49.567016+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:50.567186+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:51.567390+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:52.567554+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:53.567723+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:54.567882+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:55.568063+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:56.568243+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:57.568404+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:58.568568+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:59.568768+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:00.568943+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:01.569131+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:02.569342+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:03.569495+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:04.569773+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:05.569926+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:06.570100+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:07.570243+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:08.570436+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:09.570569+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:10.570711+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:11.570920+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:12.571137+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:13.571320+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:14.571505+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:15.571706+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:16.571944+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:17.572113+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:18.572332+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:19.572737+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:20.572986+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:21.573208+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:22.573446+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:23.573581+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:24.573812+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:25.573996+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:26.574261+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:27.574446+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:28.574624+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:29.574725+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:30.574854+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:31.575024+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:32.575137+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:33.575347+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:34.575493+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:35.575733+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:36.575962+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:37.576173+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:38.576331+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:39.576494+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:40.576703+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:41.576860+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:42.577046+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:43.577644+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:44.577838+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:45.578010+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:46.578224+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:47.578492+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:48.578725+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:49.578937+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:50.579073+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:51.579272+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:52.579541+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:53.579925+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:54.580056+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:55.580180+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:56.580362+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:57.580466+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:58.580594+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:59.580805+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:00.581011+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:01.581163+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:02.581303+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:03.581675+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:04.581823+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:05.582003+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:06.582168+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:07.582388+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:08.582608+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:09.582961+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:10.583264+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:11.583618+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:12.583909+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:13.584129+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:14.584328+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:15.584511+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:16.584785+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:17.584974+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:18.585200+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:19.585363+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:20.585515+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:21.585772+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:22.586016+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:23.586251+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:24.586452+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:25.586634+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:26.586872+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:27.587099+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:28.587305+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:29.587535+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:30.587711+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:31.587908+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:32.588039+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:33.588219+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:34.588498+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:35.588709+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:36.588880+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15335424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:37.589076+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:38.589207+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:39.589420+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:40.589580+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:41.589721+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:42.589852+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:43.590057+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:44.590222+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:45.590396+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:46.590586+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:47.590759+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:48.590906+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:49.591071+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:50.591198+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:51.591517+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:52.591729+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:53.591881+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:54.592018+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:55.592180+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:56.596364+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:57.596521+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:58.596684+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:59.596840+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:00.596967+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:01.597088+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:02.597232+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:03.597370+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:04.597501+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:05.597638+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:06.597856+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:07.597986+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:08.598117+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:09.598241+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:10.598366+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:11.598524+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:12.598741+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:13.598886+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:14.599067+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:15.599248+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:16.599418+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:17.599534+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:18.599703+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:19.599847+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:20.600026+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:21.600177+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:22.600307+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:23.600468+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:24.600724+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:25.600914+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:26.601124+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:27.601297+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:28.601751+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:29.601943+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:30.602216+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:31.602498+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:32.602754+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:33.602957+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:34.603188+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:35.603356+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:36.603579+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:37.603770+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:38.603969+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:39.604183+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:40.604382+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:41.604537+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:42.604696+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:43.604843+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:44.604980+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:45.605115+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:46.605607+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:47.605724+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:48.605853+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:49.605972+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:50.606520+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:51.606688+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:52.606822+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:53.606984+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:54.607168+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:55.607329+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:56.607479+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:57.607611+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:58.607751+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:59.607881+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:00.608013+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:01.608123+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:02.608260+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:03.608387+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:04.608552+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:05.608704+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:06.609110+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:07.609315+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:08.609407+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:09.609511+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:10.609689+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:11.609866+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:12.610043+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:13.610175+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:14.610327+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:15.610468+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 15327232 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:16.610695+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:17.610839+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:18.611043+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:19.611294+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:20.611480+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:21.611609+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:22.611772+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:23.611922+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:24.612073+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:25.612218+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:26.612425+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 15319040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:27.612582+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:28.612774+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:29.612900+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:30.613079+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:31.613226+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:32.613367+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:33.613502+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:34.613738+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:35.613984+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:36.614221+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:37.614408+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:38.614622+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:39.615024+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:40.615344+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:41.615535+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:42.615821+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:43.616048+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:44.616220+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:45.616431+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:46.616636+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:47.616905+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:48.617171+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:49.617356+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:50.617606+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:51.617805+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:52.617958+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:53.618187+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:54.618388+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:55.618628+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:56.618944+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:57.619161+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:58.619340+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:59.619500+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:00.619685+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:01.619864+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:02.620028+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:03.620224+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:04.620405+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:05.620632+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:06.620982+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:07.621152+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:08.621387+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:09.621535+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:10.621672+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:11.621871+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:12.622011+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:13.622130+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:14.622279+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:15.622473+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:16.622727+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:17.622844+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:18.622999+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:19.623136+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:20.623280+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:21.623446+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:22.623589+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:23.623742+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:24.623875+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:25.624038+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:26.624288+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:27.624455+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:28.624586+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:29.624712+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:30.624862+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:31.625014+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:32.625154+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:33.625273+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:34.625449+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:35.625603+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:36.625788+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:37.625911+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:38.626046+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:39.626199+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:40.626358+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:41.626555+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:42.626709+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:43.626841+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:44.626959+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:45.627110+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:46.627293+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:47.627460+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:48.627648+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:49.627831+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:50.627966+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:51.628110+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 15310848 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:52.628257+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:53.628502+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:54.628687+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:55.628841+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:56.629002+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:57.629124+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:58.629268+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:59.629431+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:00.629557+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:01.629826+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:02.629996+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:03.630148+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:04.630321+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:05.630490+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 15302656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:06.630682+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:07.630856+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:08.631027+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:09.631184+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:10.631311+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:11.631458+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:12.631587+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:13.631789+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:14.631922+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:15.632309+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:16.632467+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:17.632630+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:18.632959+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:19.633121+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:20.633301+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:21.633450+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:22.633622+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:23.633785+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:24.633935+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:25.634089+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:26.634238+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:27.634361+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:28.634518+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:29.634735+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:30.634909+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:31.635128+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:32.635288+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:33.635410+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:34.635551+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:35.635734+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:36.636169+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:37.636329+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:38.636486+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:39.636621+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:40.636793+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:41.636958+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:42.637148+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:43.637299+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:44.637510+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:45.637678+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:46.637955+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:47.638173+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:48.638340+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:49.638552+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:50.638732+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:51.638920+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6227 writes, 25K keys, 6227 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6227 writes, 1162 syncs, 5.36 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 376 writes, 883 keys, 376 commit groups, 1.0 writes per commit group, ingest: 0.48 MB, 0.00 MB/s
                                           Interval WAL: 376 writes, 165 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:52.639057+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:53.639218+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:54.639344+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:55.639568+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:56.639811+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 15294464 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:57.639997+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc ms_handle_reset ms_handle_reset con 0x563a6dcb6800
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3272136490
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3272136490,v1:192.168.122.100:6801/3272136490]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: get_auth_request con 0x563a6e001000 auth_method 0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:58.640230+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:59.640400+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:00.640615+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:01.640721+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:02.640843+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:03.640978+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:04.641112+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:05.641295+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:06.641525+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 ms_handle_reset con 0x563a6b2ea400 session 0x563a6e453340
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6b2eac00
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:07.641717+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:08.641860+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:09.642061+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:10.642280+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:11.642533+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:12.642708+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:13.642882+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:14.643291+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:15.643455+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:16.643914+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:17.644479+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:18.644885+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:19.645418+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:20.645731+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:21.646191+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:22.646365+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:23.646684+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059908 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:24.646813+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:25.647065+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdc8bf/0x1be000, compress 0x0/0x0/0x0, omap 0x1a453, meta 0x2bb5bad), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:26.647319+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _renew_subs
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 431.352905273s of 431.746856689s, submitted: 11
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xde45b/0x1c1000, compress 0x0/0x0/0x0, omap 0x1a71d, meta 0x2bb58e3), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:27.647465+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e630000
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:28.647746+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062682 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 147 ms_handle_reset con 0x563a6e630000 session 0x563a6ded8c40
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:29.647858+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:30.647981+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xde45b/0x1c1000, compress 0x0/0x0/0x0, omap 0x1a71d, meta 0x2bb58e3), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:31.648115+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:32.648276+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xdfff7/0x1c4000, compress 0x0/0x0/0x0, omap 0x1a925, meta 0x2bb56db), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 15392768 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:33.648442+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 15376384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065456 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:34.648608+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 15376384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:35.648802+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 15376384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:36.649041+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 15376384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:37.649239+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 15376384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0xdfff7/0x1c4000, compress 0x0/0x0/0x0, omap 0x1a925, meta 0x2bb56db), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:38.649397+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 15376384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:39.649534+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065456 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 15376384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e631800
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:40.649700+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 15237120 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:41.649861+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _renew_subs
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.761420250s of 14.998954773s, submitted: 4
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 15237120 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:42.650051+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fce63000/0x0/0x4ffc00000, data 0xe1be7/0x1c7000, compress 0x0/0x0/0x0, omap 0x1abf2, meta 0x2bb540e), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 15237120 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:43.650254+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 15228928 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:44.650463+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067510 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 149 ms_handle_reset con 0x563a6e631800 session 0x563a6c423a40
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 15220736 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:45.650605+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 15220736 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:46.650826+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 15220736 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:47.650991+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e631c00
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 15048704 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:48.651181+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fce65000/0x0/0x4ffc00000, data 0xe1be7/0x1c7000, compress 0x0/0x0/0x0, omap 0x1ad05, meta 0x2bb52fb), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 15040512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:49.651301+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071004 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 150 ms_handle_reset con 0x563a6e631c00 session 0x563a6df05dc0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:50.651456+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:51.651597+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:52.651969+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:53.652093+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fce60000/0x0/0x4ffc00000, data 0xe380f/0x1ca000, compress 0x0/0x0/0x0, omap 0x1b1f1, meta 0x2bb4e0f), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:54.652222+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071004 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:55.652365+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:56.652627+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fce60000/0x0/0x4ffc00000, data 0xe380f/0x1ca000, compress 0x0/0x0/0x0, omap 0x1b1f1, meta 0x2bb4e0f), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:57.652843+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 15032320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:58.653055+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.970823288s of 16.959716797s, submitted: 37
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:59.653266+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:00.653439+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:01.653639+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:02.653812+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:03.653938+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:04.654110+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:05.654351+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:06.654582+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:07.654717+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:08.654953+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:09.655092+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:10.655247+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:11.655489+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:12.655638+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:13.655854+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:14.656064+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:15.656221+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:16.656407+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:17.656529+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:18.656701+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:19.656870+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:20.657015+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:21.657233+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:22.657495+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:23.657731+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:24.657914+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:25.658065+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:26.658309+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:27.658490+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:28.658716+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:29.658872+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:30.659091+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:31.659272+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:32.659461+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:33.659606+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:34.659790+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:35.659943+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 15007744 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:36.660171+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 ms_handle_reset con 0x563a6c4d0c00 session 0x563a6bbcc700
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e84e000
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 14876672 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:37.660338+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 14876672 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:38.660556+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 14876672 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:39.660771+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073778 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 14876672 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:40.661011+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5d000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 14876672 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:41.661231+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.901233673s of 43.233104706s, submitted: 11
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 14860288 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:42.661384+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 13746176 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:43.661504+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:44.661756+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 13664256 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [0,0,1])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:45.661975+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 13524992 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:46.662214+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 13524992 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:47.662413+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 13524992 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:48.662578+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 13524992 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:49.662737+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 13524992 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:50.662876+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 13524992 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:51.663099+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:52.663347+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:53.663604+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:54.663803+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:55.664015+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:56.664354+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:57.664522+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:58.664824+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:59.664995+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:00.665184+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:01.665410+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:02.665705+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 13516800 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:03.665903+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 13508608 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:04.666074+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 13508608 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.107042313s of 22.957401276s, submitted: 106
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:05.666240+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 13500416 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:06.666432+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 13467648 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:07.666589+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 12394496 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:08.666799+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 12394496 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:09.667043+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 12394496 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:10.667353+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 12394496 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:11.667620+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 12394496 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:12.668009+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:13.668197+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:14.668335+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:15.668557+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:16.668838+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:17.668987+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:18.669250+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:19.669391+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:20.669534+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:21.669740+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:22.669966+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:23.670157+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:24.670327+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:25.670492+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:26.670698+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:27.670948+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:28.671088+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:29.671221+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:30.679540+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:31.679712+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:32.679948+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:33.680092+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:34.680227+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:35.680407+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:36.680616+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:37.680786+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:38.680977+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:39.681120+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:40.681289+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:41.681488+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:42.681743+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:43.681922+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:44.682098+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:45.682310+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:46.682510+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:47.682728+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:48.682898+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:49.683049+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:50.683310+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:51.683512+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:52.683757+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:53.691757+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:54.691987+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:55.692232+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:56.692415+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:57.692595+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:58.692819+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:59.693012+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:00.693179+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:01.693496+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:02.693683+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:03.693853+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:04.694025+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:05.694169+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:06.694436+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:07.694677+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:08.694838+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:09.695027+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:10.695222+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:11.695392+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:12.695541+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:13.695734+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:14.695954+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:15.696252+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:16.696514+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:17.696611+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 12378112 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:18.696745+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:19.696933+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:20.697148+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:21.697308+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:22.697542+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:23.697747+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:24.697911+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:25.698113+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:26.698356+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:27.698546+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:28.698706+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:29.698863+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:30.698981+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:31.699152+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:32.699358+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:33.699520+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:34.699708+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:35.699867+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:36.700038+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:37.700193+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:38.700394+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:39.700636+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:40.700868+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:41.701044+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:42.701323+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:43.701535+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:44.701772+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:45.702061+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:46.702305+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:47.702494+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:48.702699+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:49.702883+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:50.703127+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:51.703304+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:52.703580+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:53.703795+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:54.703956+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:55.704180+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:56.704529+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:57.704745+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:58.704968+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:59.705156+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:00.705321+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:01.705509+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:02.705751+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:03.705982+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:04.706165+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:05.706352+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:06.706533+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:07.706810+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:08.707019+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:09.707256+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:10.707511+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:11.707797+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:12.707959+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:13.708119+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:14.708305+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:15.708566+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:16.708825+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:17.709033+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:18.709194+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:19.709358+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:20.709508+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:21.709683+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:22.709872+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:23.710145+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 12369920 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:24.710318+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:25.710484+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:26.710770+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:27.710940+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:28.711101+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:29.711256+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:30.711437+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:31.711611+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:32.711808+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:33.711957+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:34.712128+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:35.712273+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:36.712440+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:37.712603+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:38.712765+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:39.712918+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:40.713068+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:41.713231+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:42.713607+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:43.713805+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:44.713960+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:45.714174+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:46.714510+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:47.714707+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:48.714877+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:49.715068+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:50.715253+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:51.715372+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:52.715484+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:53.715675+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:54.715805+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:55.715917+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:56.716097+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:57.716253+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:58.716458+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:59.716748+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:00.716963+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:01.717152+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:02.717326+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:03.717445+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:04.717592+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:05.717763+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:06.717996+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:07.718372+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:08.718562+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:09.718691+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:10.718805+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:11.718947+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:12.719134+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:13.719296+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:14.719486+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:15.719698+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:16.720067+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:17.720295+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:18.720537+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:19.720697+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:20.720842+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:21.721007+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 12361728 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:22.721204+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:23.721361+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:24.721477+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:25.721603+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:26.721860+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:27.722059+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:28.722240+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:29.722413+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:30.722547+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:31.722741+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:32.722880+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:33.723049+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:34.723211+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:35.723343+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:36.723683+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:37.723847+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:38.724116+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:39.724369+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:40.724608+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:41.724798+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:42.724991+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:43.725115+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:44.725236+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:45.725406+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:46.725597+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:47.725733+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:48.725837+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:49.725949+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:50.726127+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:51.726315+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:52.726465+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:53.726611+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:54.726751+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:55.726973+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:56.727161+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:57.727302+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 12353536 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:58.727427+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:59.727586+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:00.727705+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:01.727838+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:02.730050+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:03.730244+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:04.730440+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:05.730638+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:06.730908+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:07.731078+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:08.731299+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:09.731807+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:10.731951+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:11.732098+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:12.732379+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:13.732592+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:14.732760+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:15.732953+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:16.733213+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:17.733430+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:18.733725+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:19.733863+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:20.734045+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:21.734386+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:22.734576+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 12345344 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:23.734752+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:24.734987+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:25.735191+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:26.735427+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:27.735686+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:28.735854+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:29.736061+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:30.736278+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:31.736437+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:32.736609+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:33.736778+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:34.737014+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:35.737257+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:36.737484+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:37.737878+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:38.738087+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:39.738426+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:40.738595+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:41.738722+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:42.738862+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:43.739026+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:44.739186+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:45.739364+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:46.739766+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:47.740004+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:48.740243+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:49.740363+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:50.740498+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:51.740615+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:52.740768+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:53.740908+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:54.741083+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:55.741238+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:56.741466+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:57.741718+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:58.741920+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:59.742084+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:00.742218+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:01.742359+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:02.742481+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:03.742607+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:04.742751+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:05.742942+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:06.743167+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:07.743316+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:08.743467+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:09.743636+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:10.744016+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:11.744165+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:12.744323+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:13.744468+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 12337152 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:14.744623+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:15.744763+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:16.744927+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:17.745102+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:18.745305+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:19.745446+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:20.745592+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:21.745729+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:22.745872+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:23.746014+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:24.746180+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:25.746296+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:26.746545+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:27.746703+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:28.746937+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:29.747081+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:30.747285+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:31.747418+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:32.747578+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:33.747745+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:34.747909+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:35.748125+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:36.748335+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:37.748470+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:38.748626+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:39.748800+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:40.748964+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:41.749159+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:42.749412+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:43.749604+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:44.749819+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:45.749992+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:46.750183+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:47.750304+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:48.750446+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:49.750608+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:50.750751+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:51.750974+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:52.751159+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:53.751330+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:54.751491+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:55.751620+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:56.751812+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:57.752018+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:58.752222+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:59.752399+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:00.752523+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:01.752709+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:02.752842+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:03.752986+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:04.753261+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:05.753496+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:06.753698+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:07.753892+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:08.754100+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:09.754221+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:10.754369+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:11.754525+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:12.754703+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:13.754838+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:14.755071+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:15.755193+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:16.755338+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:17.755472+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:18.755634+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:19.755834+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:20.756042+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:21.756213+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:22.756422+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:23.756583+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:24.756802+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 12328960 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:25.756947+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:26.757115+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:27.757266+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:28.757421+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:29.757628+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:30.757821+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:31.758036+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:32.758195+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:33.758352+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:34.758517+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:35.758705+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:36.758898+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 12312576 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:37.759045+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:38.759251+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:39.759452+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:40.759609+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:41.759748+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:42.759966+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:43.760088+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:44.760217+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:45.760349+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:46.760525+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:47.760708+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:48.760900+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:49.761049+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:50.761198+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:51.761411+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:52.761615+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:53.761779+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:54.761994+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:55.762165+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:56.762400+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:57.762547+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:58.762705+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:59.762873+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:00.763098+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:01.763293+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:02.763517+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:03.763764+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:04.763962+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:05.764085+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:06.764307+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:07.764472+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:08.764683+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:09.764974+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:10.765146+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:11.765402+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:12.765611+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:13.765801+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:14.766005+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:15.766169+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:16.766374+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:17.766566+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:18.766725+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:19.766921+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:20.767078+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:21.767248+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:22.767436+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:23.767565+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:24.767715+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:25.767831+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:26.768019+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:27.768150+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:28.768301+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:29.768496+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:30.768630+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:31.768799+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:32.768977+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:33.769111+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:34.769286+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:35.769420+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:36.769583+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:37.769779+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:38.769960+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:39.770139+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:40.770321+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:41.770518+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 12304384 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:42.770717+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:43.770851+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:44.770988+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:45.771126+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:46.771344+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:47.771501+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:48.771694+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:49.771843+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:50.772024+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:51.772177+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6549 writes, 26K keys, 6549 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6549 writes, 1308 syncs, 5.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 322 writes, 617 keys, 322 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 322 writes, 146 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:52.772364+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:53.772491+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:54.772607+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:55.772768+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:56.773030+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:57.773174+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:58.773370+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:59.773578+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:00.773762+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:01.773934+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:02.774104+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 12288000 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:03.774290+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 12271616 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:04.774440+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 12271616 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:05.774693+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 12271616 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:06.774949+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 12271616 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:07.775120+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 12271616 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:08.775318+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:09.775539+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:10.775798+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:11.776030+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:12.776267+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:13.776465+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:14.776624+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:15.776878+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:16.777137+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:17.777351+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:18.777497+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:19.777708+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:20.777846+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:21.778004+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:22.778172+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 12263424 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:23.778333+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:24.778492+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:25.778614+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:26.778824+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:27.778956+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:28.779099+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:29.779241+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:30.779398+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:31.779557+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:32.779729+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:33.779895+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:34.780064+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:35.780238+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:36.780422+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:37.780561+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:38.780736+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:39.780845+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:40.780977+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:41.781129+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:42.781290+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:43.781421+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:44.781566+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:45.781757+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:46.781936+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:47.782366+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:48.782635+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:49.782999+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:50.783297+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:51.783498+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:52.783710+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:53.783917+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:54.784082+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:55.784208+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:56.784390+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:57.785383+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:58.787989+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:59.789026+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:00.790439+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:01.791018+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:02.791712+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:03.791957+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:04.792252+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:05.793204+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:06.793406+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:07.794243+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:08.795037+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:09.795751+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:10.796203+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:11.796550+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:12.796753+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:13.796969+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:14.797248+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:15.797451+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:16.797948+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:17.798136+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:18.798320+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:19.798497+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:20.798676+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:21.798935+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:22.799149+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:23.799315+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:24.799473+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:25.799733+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:26.799923+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:27.800204+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:28.800474+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:29.800629+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:30.800797+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:31.800987+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:32.801174+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:33.801347+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:34.801552+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:35.801789+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:36.802044+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:37.802217+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:38.802421+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:39.802601+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:40.802764+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:41.802925+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:42.803088+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 577.070556641s of 577.562988281s, submitted: 18
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:43.803233+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 12247040 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:44.803352+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 12230656 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:45.803594+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 12214272 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:46.803910+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 12197888 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:47.804057+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 12148736 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:48.804265+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 12132352 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:49.804455+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 12017664 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:50.804576+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 12001280 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:51.804695+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 11976704 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:52.804816+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.252580643s of 10.013707161s, submitted: 94
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 11968512 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:53.804986+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:54.805146+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:55.805362+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:56.805563+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:57.805712+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:58.805979+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:59.806115+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:00.806315+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:01.806433+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:02.806586+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:03.806733+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:04.806946+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:05.807121+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:06.807368+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:07.807562+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:08.807807+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:09.808005+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:10.808218+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:11.808377+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:12.808585+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:13.808814+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:14.809066+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:15.809318+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:16.809607+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:17.809853+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:18.809988+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:19.810170+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:20.810320+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:21.810544+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:22.810745+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:23.810946+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:24.811140+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:25.811301+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:26.811497+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:27.811626+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:28.811863+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:29.812058+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:30.812244+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:31.812414+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:32.812540+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:33.812837+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:34.812975+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:35.813068+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:36.813224+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:37.813330+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:38.813471+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:39.813616+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:40.813782+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:41.813924+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:42.814070+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:43.814203+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:44.814436+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:45.814593+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:46.814798+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:47.814981+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:48.815159+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:49.815372+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:50.815490+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:51.815608+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:52.815737+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:53.815901+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:54.816054+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:55.816213+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:56.816431+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:57.816600+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:58.816749+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:59.816923+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:00.817048+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:01.817251+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:02.817417+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:03.817763+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:04.817883+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:05.818106+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:06.818338+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:07.818501+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:08.818797+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:09.818916+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:10.819137+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:11.819280+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:12.819461+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:13.819622+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:14.819806+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:15.820088+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:16.820295+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:17.820459+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:18.820685+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:19.820865+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:20.821000+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:21.821154+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:22.821309+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:23.821471+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:24.821706+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:25.821852+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:26.822061+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:27.822249+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:28.822392+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:29.822541+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:30.822789+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:31.822960+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:32.823150+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:33.823434+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:34.823597+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:35.823730+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:36.823894+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:37.824062+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:38.824423+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:39.824563+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:40.824851+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:41.825026+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:42.825281+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:43.825434+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:44.825626+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:45.825819+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:46.826016+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:47.826260+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:48.826463+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:49.826622+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:50.826817+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:51.827022+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:52.827216+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:53.827390+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:54.827618+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:55.827883+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:56.828218+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:57.828427+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:58.828696+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:59.828857+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:00.828988+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:01.829150+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:02.829318+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:03.829498+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:04.829705+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:05.829938+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:06.830156+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:07.830319+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:08.830530+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:09.830690+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:10.830871+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:11.831033+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:12.831275+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:13.831462+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:14.831661+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:15.831854+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:16.832075+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:17.832258+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:18.832400+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:19.832517+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:20.832735+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:21.832879+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:22.833076+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:23.833238+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:24.833417+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:25.833562+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:26.833738+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:27.833964+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:28.834159+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:29.834324+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:30.834630+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:31.834898+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:32.835057+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:33.835244+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:34.835408+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:35.835609+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:36.835839+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:37.836010+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:38.836253+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:39.836591+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:40.836948+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:41.837094+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:42.837266+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:43.837435+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:44.837631+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:45.837847+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:46.838008+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:47.838127+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:48.838294+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:49.838442+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:50.838635+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:51.838858+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets getting new tickets!
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:52.839216+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _finish_auth 0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:52.840757+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:53.839373+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:54.839499+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:55.839638+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:56.839828+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:57.840026+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc ms_handle_reset ms_handle_reset con 0x563a6e001000
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3272136490
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3272136490,v1:192.168.122.100:6801/3272136490]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: get_auth_request con 0x563a6e9bd000 auth_method 0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:58.840215+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:59.840382+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:00.840756+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:01.840907+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:02.841056+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:03.841178+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:04.841384+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:05.841699+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 ms_handle_reset con 0x563a6b2eac00 session 0x563a6c2fdc00
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6c4d0c00
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:06.841875+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:07.842063+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:08.842215+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:09.842405+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:10.842588+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:11.842755+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:12.842950+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:13.843150+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:14.843330+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:15.843514+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:16.843740+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:17.843877+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:18.844124+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:19.844565+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:20.844784+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:21.844951+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:22.845129+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:23.845263+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:24.845494+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:25.845718+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:26.845984+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:27.846183+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:28.846368+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:29.846525+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:30.846732+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:31.846868+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:32.847018+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:33.847168+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:34.847299+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:35.847443+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:36.847635+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:37.847782+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:38.847911+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:39.848053+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:40.848183+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:41.848386+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:42.848569+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:43.848755+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:44.848913+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:45.849052+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:46.849270+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:47.849511+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:48.849725+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:49.849877+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:50.850086+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:51.850226+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:52.850428+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:53.850591+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:54.850795+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:55.850942+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:56.851172+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:57.851334+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:58.851492+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:59.851648+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:00.851833+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:01.851989+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:02.852167+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:03.852338+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:04.852494+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:05.852718+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:06.852975+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:07.853119+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:08.853254+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:09.853446+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:10.853590+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:11.853806+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:12.854031+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:13.854188+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:14.854332+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:15.854508+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:16.854758+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:17.855245+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:18.855430+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:19.855619+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:20.855810+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:21.856006+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:22.856169+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:23.856368+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:24.856570+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:25.856778+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:26.856974+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.857182+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.857336+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.857546+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.857718+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.858065+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.858210+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.858436+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.858634+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.859003+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 11960320 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.859203+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 ms_handle_reset con 0x563a6e84e000 session 0x563a6e4e61c0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e84e400
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.859365+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:38.859642+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.859865+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.860054+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.860211+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.860366+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.860509+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.860745+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.860905+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.861076+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.861237+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.861367+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.861552+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.861750+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.861886+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.862041+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.862227+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.862353+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.862501+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.862716+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.862892+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.863034+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.863178+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.863311+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.863521+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.863723+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.863883+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.864014+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.864187+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.864413+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 11829248 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.864584+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 314.308593750s of 314.990600586s, submitted: 12
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0xe528e/0x1cd000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073058 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 11804672 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.864777+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 11804672 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.864918+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 11796480 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.865044+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e84e000
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 11739136 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.865191+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 11706368 heap: 94371840 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.865339+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fc9ee000/0x0/0x4ffc00000, data 0x5552b1/0x63e000, compress 0x0/0x0/0x0, omap 0x1b53f, meta 0x2bb4ac1), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098599 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.865470+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _renew_subs
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 ms_handle_reset con 0x563a6e84e000 session 0x563a6c277a40
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.865640+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.865792+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.866003+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.866185+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.866324+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.866447+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.866618+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.866747+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.866917+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.867064+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.867231+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.867482+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.867723+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.867884+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.868021+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.868208+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.868410+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.868561+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.868708+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.868871+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 16310272 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.869134+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.869325+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.869561+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.869854+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.870048+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.870205+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.870521+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.870732+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.870988+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.871237+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.871480+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.871782+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.872051+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.872301+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.872463+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.872637+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.872821+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.872958+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.873114+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.873235+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.873399+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.873579+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.873734+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.873915+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.874188+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.874363+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.874544+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.874704+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.874856+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.874996+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.875159+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.875296+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.875470+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.875612+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.875769+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.875966+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.876207+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.876434+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.876609+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.876798+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.877184+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.877431+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.877630+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.877859+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.878058+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.878233+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.878484+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.878801+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.878991+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.879254+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.879634+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.880059+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.880355+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.880855+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.881155+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.882208+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.883010+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.883573+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.883979+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.884270+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.884574+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.884972+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.885604+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.885939+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.886263+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101925 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.886560+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.886810+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 16302080 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.887352+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: handle_auth_request added challenge on 0x563a6e84e800
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 16171008 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 92.236633301s of 94.354560852s, submitted: 27
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.887687+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _renew_subs
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fc9eb000/0x0/0x4ffc00000, data 0x556e4d/0x641000, compress 0x0/0x0/0x0, omap 0x1b739, meta 0x2bb48c7), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 16154624 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.887822+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fc9e6000/0x0/0x4ffc00000, data 0x558a3d/0x644000, compress 0x0/0x0/0x0, omap 0x1baae, meta 0x2bb4552), peers [1,2] op hist [0,0,0,0,0,1])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082279 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 15065088 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 153 ms_handle_reset con 0x563a6e84e800 session 0x563a6d0c1c00
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.888066+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 15360000 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.888215+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 15360000 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:46.888428+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.888593+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fce59000/0x0/0x4ffc00000, data 0xe8a1a/0x1d3000, compress 0x0/0x0/0x0, omap 0x1bb48, meta 0x2bb44b8), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.888718+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.888877+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.889039+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fce59000/0x0/0x4ffc00000, data 0xe8a1a/0x1d3000, compress 0x0/0x0/0x0, omap 0x1bb48, meta 0x2bb44b8), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.889194+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.662512779s of 10.560037613s, submitted: 26
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.889308+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.889472+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.889628+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.889807+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.890049+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.890378+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.890512+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.890729+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.890938+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.891153+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 15368192 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.891285+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.891473+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.891618+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.891788+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.892051+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.892187+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.892364+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.892510+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.892676+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.892826+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.892963+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.893107+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.893254+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.893406+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.893574+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.893735+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.893903+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.894051+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.894184+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.894401+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.894595+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.894821+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.895020+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.895238+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.895425+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.895628+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.895804+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.895973+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.896162+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.896314+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.896470+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.896703+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.896909+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.897064+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.897262+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.897402+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.897605+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.897818+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.898195+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.898368+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.898516+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.898727+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.898914+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.899102+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.899331+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.899806+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.899977+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.900208+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.900418+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.900588+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 6874 writes, 26K keys, 6874 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6874 writes, 1465 syncs, 4.69 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 325 writes, 616 keys, 325 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 325 writes, 157 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.900812+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.900963+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.901102+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.901291+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.901567+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.901733+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.901868+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.902121+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.902254+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.902502+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.902643+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.902822+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.903300+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.903466+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.903604+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.903831+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.903968+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.904096+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.904287+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.904415+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.904543+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.904726+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.904969+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.905125+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.905358+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.905583+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.905784+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.905915+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.906049+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.906170+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.906303+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.906453+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.906638+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.906816+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:26.906981+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:27.907115+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:28.907266+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:29.907404+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:30.907538+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:31.907774+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 15499264 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:32.907942+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fce54000/0x0/0x4ffc00000, data 0xea499/0x1d6000, compress 0x0/0x0/0x0, omap 0x1be8f, meta 0x2bb4171), peers [1,2] op hist [])
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 15351808 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'config diff' '{prefix=config diff}'
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:33.908070+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'config show' '{prefix=config show}'
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:49:06 compute-0 ceph-osd[85864]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:49:06 compute-0 ceph-osd[85864]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083268 data_alloc: 218103808 data_used: 11126
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 14958592 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:34.908302+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 14655488 heap: 99033088 old mem: 2845415832 new mem: 2845415832
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: tick
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_tickets
Jan 31 08:49:06 compute-0 ceph-osd[85864]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:35.908468+0000)
Jan 31 08:49:06 compute-0 ceph-osd[85864]: do_command 'log dump' '{prefix=log dump}'
Jan 31 08:49:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 31 08:49:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4285517524' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 8.998333557808532e-07 of space, bias 1.0, pg target 0.000269950006734256 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.912953184942191e-06 of space, bias 4.0, pg target 0.002295543821930629 quantized to 16 (current 16)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:06 compute-0 ceph-mgr[75591]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:49:06 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 31 08:49:06 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974940673' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3176308253' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3241229250' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3901501562' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4285517524' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 31 08:49:06 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3974940673' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 31 08:49:07 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14774 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 31 08:49:07 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612905190' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 31 08:49:07 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14780 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:07 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:07 compute-0 ceph-mon[75294]: pgmap v1712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:07 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2612905190' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 31 08:49:07 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14778 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:08 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14782 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:08 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:08 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14786 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:08 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} v 0)
Jan 31 08:49:08 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:49:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 31 08:49:09 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062705658' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 31 08:49:09 compute-0 ceph-mon[75294]: from='client.14774 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:09 compute-0 ceph-mon[75294]: from='client.14780 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:09 compute-0 ceph-mon[75294]: from='client.14778 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:09 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:49:09 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 08:49:09 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14790 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:09 compute-0 systemd[1]: Started Hostname Service.
Jan 31 08:49:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} v 0)
Jan 31 08:49:09 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:49:09 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 31 08:49:09 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347787802' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 31 08:49:09 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14794 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:10 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:11 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14798 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:11 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 31 08:49:11 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605267530' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 31 08:49:11 compute-0 ceph-mon[75294]: from='client.14782 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:11 compute-0 ceph-mon[75294]: pgmap v1713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:11 compute-0 ceph-mon[75294]: from='client.14786 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:11 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2062705658' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 31 08:49:11 compute-0 ceph-mon[75294]: from='client.14790 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:11 compute-0 ceph-mon[75294]: from='mgr.14122 192.168.122.100:0/1638718954' entity='mgr.compute-0.lhuavc' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ockecq", "name": "rgw_frontends"} : dispatch
Jan 31 08:49:11 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/347787802' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 31 08:49:11 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14800 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 31 08:49:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751955931' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 08:49:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 08:49:12 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='client.14794 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: pgmap v1714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='client.14798 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/605267530' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='client.14800 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2751955931' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 08:49:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 31 08:49:12 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083859132' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 31 08:49:12 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:13 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14814 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 31 08:49:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458573098' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 31 08:49:14 compute-0 ceph-mon[75294]: pgmap v1715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:14 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/4083859132' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 31 08:49:14 compute-0 ceph-mon[75294]: from='client.14814 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:14 compute-0 podman[268349]: 2026-01-31 08:49:14.21454241 +0000 UTC m=+0.082140499 container health_status c5a4afab845a33b9645d030523c7f186c6f255f5bffe6c10937050499d2e78e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bde7fb4a54ccd298914962bcf77c9debfd66e2e80d7a74fee649b83b7ce4cd15-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f-211cbd566ced265554f6f45f3b6eed058d98a67789d7e179b2cc865a032d4f9f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 08:49:14 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:14 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 31 08:49:14 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490791199' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 31 08:49:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 31 08:49:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3691806989' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 31 08:49:15 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1458573098' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 31 08:49:15 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/2490791199' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 31 08:49:15 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 31 08:49:15 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190297764' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 31 08:49:16 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14824 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:16 compute-0 ceph-mon[75294]: pgmap v1716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:16 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3691806989' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 31 08:49:16 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/190297764' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 31 08:49:16 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:16 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 31 08:49:16 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773482754' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 31 08:49:17 compute-0 ceph-mon[75294]: from='client.14824 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:17 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/3773482754' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 31 08:49:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 31 08:49:17 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498281065' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 31 08:49:17 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14830 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:49:17 compute-0 ceph-mon[75294]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:18 compute-0 ceph-mon[75294]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 31 08:49:18 compute-0 ceph-mon[75294]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713067900' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 31 08:49:18 compute-0 ceph-mon[75294]: pgmap v1717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:18 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/1498281065' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 31 08:49:18 compute-0 ceph-mon[75294]: from='client.? 192.168.122.100:0/713067900' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 31 08:49:18 compute-0 ceph-mgr[75591]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:18 compute-0 ceph-mgr[75591]: log_channel(audit) log [DBG] : from='client.14834 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
